Jan 23 17:31:29.359515 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 17:31:29.359538 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Jan 23 15:38:20 -00 2026
Jan 23 17:31:29.359548 kernel: KASLR enabled
Jan 23 17:31:29.359554 kernel: efi: EFI v2.7 by EDK II
Jan 23 17:31:29.359560 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438351218
Jan 23 17:31:29.359566 kernel: random: crng init done
Jan 23 17:31:29.359573 kernel: secureboot: Secure boot disabled
Jan 23 17:31:29.359579 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:31:29.359585 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Jan 23 17:31:29.359593 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 17:31:29.359599 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359605 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359611 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359617 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359626 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359633 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359639 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359646 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359652 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359659 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:31:29.359665 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 17:31:29.359671 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 23 17:31:29.359678 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:31:29.359685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Jan 23 17:31:29.359692 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Jan 23 17:31:29.359698 kernel: Zone ranges:
Jan 23 17:31:29.359705 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 17:31:29.359711 kernel:   DMA32  empty
Jan 23 17:31:29.359717 kernel:   Normal [mem 0x0000000100000000-0x000000043fffffff]
Jan 23 17:31:29.359724 kernel:   Device empty
Jan 23 17:31:29.359730 kernel: Movable zone start for each node
Jan 23 17:31:29.359736 kernel: Early memory node ranges
Jan 23 17:31:29.359743 kernel:   node 0: [mem 0x0000000040000000-0x000000043843ffff]
Jan 23 17:31:29.359749 kernel:   node 0: [mem 0x0000000438440000-0x000000043872ffff]
Jan 23 17:31:29.359771 kernel:   node 0: [mem 0x0000000438730000-0x000000043bbfffff]
Jan 23 17:31:29.359779 kernel:   node 0: [mem 0x000000043bc00000-0x000000043bfdffff]
Jan 23 17:31:29.359786 kernel:   node 0: [mem 0x000000043bfe0000-0x000000043fffffff]
Jan 23 17:31:29.359792 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Jan 23 17:31:29.359799 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 23 17:31:29.359805 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:31:29.359815 kernel: psci: PSCIv1.3 detected in firmware.
Jan 23 17:31:29.359823 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:31:29.359830 kernel: psci: Trusted OS migration not required
Jan 23 17:31:29.359836 kernel: psci: SMC Calling Convention v1.1
Jan 23 17:31:29.359843 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 23 17:31:29.359850 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 23 17:31:29.359857 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 23 17:31:29.359864 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Jan 23 17:31:29.359871 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Jan 23 17:31:29.359879 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 17:31:29.359886 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 17:31:29.359893 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 23 17:31:29.359900 kernel: Detected PIPT I-cache on CPU0
Jan 23 17:31:29.359907 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 17:31:29.359914 kernel: CPU features: detected: Spectre-v4
Jan 23 17:31:29.359920 kernel: CPU features: detected: Spectre-BHB
Jan 23 17:31:29.359927 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 17:31:29.359934 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 17:31:29.359941 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 17:31:29.359947 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 17:31:29.359955 kernel: alternatives: applying boot alternatives
Jan 23 17:31:29.359963 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=35f959b0e84cd72dec35dcaa9fdae098b059a7436b8ff34bc604c87ac6375079
Jan 23 17:31:29.359970 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 23 17:31:29.359977 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 17:31:29.359984 kernel: Fallback order for Node 0: 0
Jan 23 17:31:29.359990 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Jan 23 17:31:29.359997 kernel: Policy zone: Normal
Jan 23 17:31:29.360004 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:31:29.360010 kernel: software IO TLB: area num 4.
Jan 23 17:31:29.360017 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 23 17:31:29.360025 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 17:31:29.360032 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:31:29.360040 kernel: rcu: RCU event tracing is enabled.
Jan 23 17:31:29.360047 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 17:31:29.360054 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 17:31:29.360061 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 17:31:29.360068 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:31:29.360075 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 17:31:29.360081 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 17:31:29.360088 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 17:31:29.360095 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 17:31:29.360103 kernel: GICv3: 256 SPIs implemented
Jan 23 17:31:29.360110 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 17:31:29.360117 kernel: Root IRQ handler: gic_handle_irq
Jan 23 17:31:29.360123 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 23 17:31:29.360130 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 17:31:29.360137 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 23 17:31:29.360144 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 23 17:31:29.360151 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 17:31:29.360158 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Jan 23 17:31:29.360165 kernel: GICv3: using LPI property table @0x0000000100130000
Jan 23 17:31:29.360171 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Jan 23 17:31:29.360178 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:31:29.360186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 17:31:29.360193 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 17:31:29.360200 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 17:31:29.360208 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 17:31:29.360214 kernel: arm-pv: using stolen time PV
Jan 23 17:31:29.360234 kernel: Console: colour dummy device 80x25
Jan 23 17:31:29.360242 kernel: ACPI: Core revision 20240827
Jan 23 17:31:29.360252 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 17:31:29.360263 kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:31:29.360271 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 17:31:29.360279 kernel: landlock: Up and running.
Jan 23 17:31:29.360287 kernel: SELinux: Initializing.
Jan 23 17:31:29.360294 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 17:31:29.360301 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 17:31:29.360308 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:31:29.360316 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 17:31:29.360324 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 17:31:29.360332 kernel: Remapping and enabling EFI services.
Jan 23 17:31:29.360339 kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:31:29.360346 kernel: Detected PIPT I-cache on CPU1
Jan 23 17:31:29.360353 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 23 17:31:29.360360 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Jan 23 17:31:29.360367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 17:31:29.360376 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 17:31:29.360383 kernel: Detected PIPT I-cache on CPU2
Jan 23 17:31:29.360395 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 23 17:31:29.360403 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Jan 23 17:31:29.360411 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 17:31:29.360418 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 23 17:31:29.360426 kernel: Detected PIPT I-cache on CPU3
Jan 23 17:31:29.360434 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 23 17:31:29.360443 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Jan 23 17:31:29.360450 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 17:31:29.360457 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 23 17:31:29.360465 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 17:31:29.360472 kernel: SMP: Total of 4 processors activated.
Jan 23 17:31:29.360480 kernel: CPU: All CPU(s) started at EL1
Jan 23 17:31:29.360489 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 17:31:29.360496 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 17:31:29.360504 kernel: CPU features: detected: Common not Private translations
Jan 23 17:31:29.360511 kernel: CPU features: detected: CRC32 instructions
Jan 23 17:31:29.360519 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 23 17:31:29.360527 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 17:31:29.360537 kernel: CPU features: detected: LSE atomic instructions
Jan 23 17:31:29.360547 kernel: CPU features: detected: Privileged Access Never
Jan 23 17:31:29.360554 kernel: CPU features: detected: RAS Extension Support
Jan 23 17:31:29.360562 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 23 17:31:29.360570 kernel: alternatives: applying system-wide alternatives
Jan 23 17:31:29.360578 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jan 23 17:31:29.360586 kernel: Memory: 16323608K/16777216K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 430000K reserved, 16384K cma-reserved)
Jan 23 17:31:29.360593 kernel: devtmpfs: initialized
Jan 23 17:31:29.360602 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:31:29.360610 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 17:31:29.360617 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 17:31:29.360625 kernel: 0 pages in range for non-PLT usage
Jan 23 17:31:29.360632 kernel: 515168 pages in range for PLT usage
Jan 23 17:31:29.360640 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:31:29.360647 kernel: SMBIOS 3.0.0 present.
Jan 23 17:31:29.360655 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jan 23 17:31:29.360663 kernel: DMI: Memory slots populated: 1/1
Jan 23 17:31:29.360671 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:31:29.360679 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:31:29.360686 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:31:29.360694 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:31:29.360701 kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:31:29.360709 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:31:29.360718 kernel: cpuidle: using governor menu
Jan 23 17:31:29.360726 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 17:31:29.360733 kernel: audit: type=2000 audit(0.050:1): state=initialized audit_enabled=0 res=1
Jan 23 17:31:29.360741 kernel: ASID allocator initialised with 32768 entries
Jan 23 17:31:29.360748 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:31:29.360764 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:31:29.360772 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:31:29.360781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:31:29.360789 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:31:29.360796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:31:29.360804 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:31:29.360811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:31:29.360819 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:31:29.360836 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:31:29.360844 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:31:29.360853 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:31:29.360861 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:31:29.360869 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:31:29.360876 kernel: ACPI: Interpreter enabled
Jan 23 17:31:29.360884 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:31:29.360891 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 17:31:29.360899 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:31:29.360907 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:31:29.360915 kernel: ACPI: CPU2 has been hot-added
Jan 23 17:31:29.360922 kernel: ACPI: CPU3 has been hot-added
Jan 23 17:31:29.360930 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 17:31:29.360937 kernel: printk: legacy console [ttyAMA0] enabled
Jan 23 17:31:29.360945 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 17:31:29.361103 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 17:31:29.361191 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 17:31:29.361272 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 17:31:29.361352 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 23 17:31:29.361431 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 23 17:31:29.361441 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 23 17:31:29.361449 kernel: PCI host bridge to bus 0000:00
Jan 23 17:31:29.361538 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 23 17:31:29.361611 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 17:31:29.361683 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 23 17:31:29.361771 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 17:31:29.361867 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 23 17:31:29.361962 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.362047 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff]
Jan 23 17:31:29.362126 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 23 17:31:29.362205 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff]
Jan 23 17:31:29.362284 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 23 17:31:29.362370 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.362452 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff]
Jan 23 17:31:29.362530 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jan 23 17:31:29.362608 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff]
Jan 23 17:31:29.362693 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.362788 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff]
Jan 23 17:31:29.362872 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Jan 23 17:31:29.362950 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff]
Jan 23 17:31:29.363029 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 23 17:31:29.363115 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.363195 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff]
Jan 23 17:31:29.363300 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Jan 23 17:31:29.363389 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 23 17:31:29.363477 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.363557 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff]
Jan 23 17:31:29.363636 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Jan 23 17:31:29.363716 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff]
Jan 23 17:31:29.363808 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 23 17:31:29.363903 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.363983 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff]
Jan 23 17:31:29.364064 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Jan 23 17:31:29.364145 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff]
Jan 23 17:31:29.364227 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 23 17:31:29.364316 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.364420 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff]
Jan 23 17:31:29.364502 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Jan 23 17:31:29.364617 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.364701 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff]
Jan 23 17:31:29.364792 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Jan 23 17:31:29.364899 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.364986 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff]
Jan 23 17:31:29.365065 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Jan 23 17:31:29.365149 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.365228 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff]
Jan 23 17:31:29.365308 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Jan 23 17:31:29.365399 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.365479 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff]
Jan 23 17:31:29.365557 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Jan 23 17:31:29.365643 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.365723 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff]
Jan 23 17:31:29.365820 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Jan 23 17:31:29.365909 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.365992 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff]
Jan 23 17:31:29.366071 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Jan 23 17:31:29.366156 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.366235 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff]
Jan 23 17:31:29.366316 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Jan 23 17:31:29.366400 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.366480 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff]
Jan 23 17:31:29.366560 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Jan 23 17:31:29.366648 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.366731 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff]
Jan 23 17:31:29.366824 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Jan 23 17:31:29.366913 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.366993 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff]
Jan 23 17:31:29.367072 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Jan 23 17:31:29.367157 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.367240 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff]
Jan 23 17:31:29.367318 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Jan 23 17:31:29.367396 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff]
Jan 23 17:31:29.367475 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff]
Jan 23 17:31:29.367559 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.367637 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff]
Jan 23 17:31:29.367717 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Jan 23 17:31:29.367816 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff]
Jan 23 17:31:29.367899 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff]
Jan 23 17:31:29.367984 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.368063 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff]
Jan 23 17:31:29.368141 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Jan 23 17:31:29.368221 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]
Jan 23 17:31:29.368298 kernel: pci 0000:00:03.3: bridge window [mem 0x11a00000-0x11bfffff]
Jan 23 17:31:29.368382 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.368461 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff]
Jan 23 17:31:29.368539 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Jan 23 17:31:29.368617 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff]
Jan 23 17:31:29.368697 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff]
Jan 23 17:31:29.368793 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.368887 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff]
Jan 23 17:31:29.368968 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Jan 23 17:31:29.369047 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff]
Jan 23 17:31:29.369124 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff]
Jan 23 17:31:29.369211 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.369290 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff]
Jan 23 17:31:29.369392 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Jan 23 17:31:29.369479 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff]
Jan 23 17:31:29.369558 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff]
Jan 23 17:31:29.369643 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.369725 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff]
Jan 23 17:31:29.369818 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Jan 23 17:31:29.369899 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff]
Jan 23 17:31:29.369978 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff]
Jan 23 17:31:29.370070 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.370152 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff]
Jan 23 17:31:29.370235 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Jan 23 17:31:29.370317 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff]
Jan 23 17:31:29.370398 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff]
Jan 23 17:31:29.370484 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.370562 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff]
Jan 23 17:31:29.370640 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Jan 23 17:31:29.370718 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff]
Jan 23 17:31:29.370808 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 17:31:29.370895 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.370975 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff]
Jan 23 17:31:29.371053 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Jan 23 17:31:29.371131 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff]
Jan 23 17:31:29.371213 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 17:31:29.371300 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.371380 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff]
Jan 23 17:31:29.371459 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Jan 23 17:31:29.371538 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff]
Jan 23 17:31:29.371618 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 17:31:29.371705 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.371800 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff]
Jan 23 17:31:29.371882 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Jan 23 17:31:29.371965 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff]
Jan 23 17:31:29.372044 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 17:31:29.372131 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.372212 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff]
Jan 23 17:31:29.372291 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Jan 23 17:31:29.372371 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff]
Jan 23 17:31:29.372454 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff]
Jan 23 17:31:29.372539 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.372619 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff]
Jan 23 17:31:29.372699 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Jan 23 17:31:29.372806 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff]
Jan 23 17:31:29.372917 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff]
Jan 23 17:31:29.373015 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.373116 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff]
Jan 23 17:31:29.373197 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Jan 23 17:31:29.373278 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff]
Jan 23 17:31:29.373356 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff]
Jan 23 17:31:29.373441 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:31:29.373522 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff]
Jan 23 17:31:29.373600 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Jan 23 17:31:29.373678 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff]
Jan 23 17:31:29.373769 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 17:31:29.373872 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 17:31:29.373956 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff]
Jan 23 17:31:29.374051 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 23 17:31:29.374136 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 17:31:29.374228 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 17:31:29.374311 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit]
Jan 23 17:31:29.374398 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Jan 23 17:31:29.374481 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff]
Jan 23 17:31:29.374562 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 23 17:31:29.374656 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 17:31:29.374743 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 23 17:31:29.374858 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 17:31:29.374955 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff]
Jan 23 17:31:29.375039 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 23 17:31:29.375134 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint
Jan 23 17:31:29.375217 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff]
Jan 23 17:31:29.375303 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 23 17:31:29.375387 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 23 17:31:29.375474 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 23 17:31:29.375583 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 23 17:31:29.375669 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 23 17:31:29.375749 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 23 17:31:29.375847 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 23 17:31:29.375931 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 17:31:29.376011 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 23 17:31:29.376090 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 23 17:31:29.376173 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 17:31:29.376254 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 23 17:31:29.376333 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 23 17:31:29.376415 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 17:31:29.376495 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 23 17:31:29.376574 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 23 17:31:29.376655 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 17:31:29.376737 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 23 17:31:29.376846 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 23 17:31:29.376939 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 17:31:29.377018 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000
Jan 23 17:31:29.377097 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000
Jan 23 17:31:29.377181 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 17:31:29.377264 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 23 17:31:29.377342 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 23 17:31:29.377424 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 17:31:29.377503 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 23 17:31:29.377582 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 23 17:31:29.377666 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jan 23 17:31:29.377746 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000
Jan 23 17:31:29.377857 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000
Jan 23 17:31:29.377943 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000
Jan 23 17:31:29.378023 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jan 23 17:31:29.378101 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000
Jan 23 17:31:29.378187 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000
Jan 23 17:31:29.378266 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000
Jan 23 17:31:29.378345 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000
Jan 23 17:31:29.378427 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000
Jan 23 17:31:29.378508 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000
Jan 23 17:31:29.378587 kernel: pci 0000:00:02.4:
bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000 Jan 23 17:31:29.378672 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 23 17:31:29.378765 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Jan 23 17:31:29.378925 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Jan 23 17:31:29.379015 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 23 17:31:29.379095 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Jan 23 17:31:29.379179 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Jan 23 17:31:29.379262 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 23 17:31:29.379341 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Jan 23 17:31:29.379419 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Jan 23 17:31:29.379502 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 23 17:31:29.379581 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Jan 23 17:31:29.379662 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Jan 23 17:31:29.379745 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 23 17:31:29.379853 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Jan 23 17:31:29.379935 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to 
[bus 12] add_size 200000 add_align 100000 Jan 23 17:31:29.380018 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 23 17:31:29.380102 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Jan 23 17:31:29.380182 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Jan 23 17:31:29.380265 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 23 17:31:29.380345 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Jan 23 17:31:29.380424 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Jan 23 17:31:29.380508 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 23 17:31:29.380591 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Jan 23 17:31:29.380670 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Jan 23 17:31:29.380760 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 23 17:31:29.380866 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Jan 23 17:31:29.380949 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Jan 23 17:31:29.381033 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 23 17:31:29.381116 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Jan 23 17:31:29.381195 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 
Jan 23 17:31:29.381277 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 23 17:31:29.381357 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Jan 23 17:31:29.381436 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Jan 23 17:31:29.381521 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 23 17:31:29.381601 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Jan 23 17:31:29.381680 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Jan 23 17:31:29.381773 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 23 17:31:29.381858 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Jan 23 17:31:29.381937 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Jan 23 17:31:29.382022 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 23 17:31:29.382105 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Jan 23 17:31:29.382183 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Jan 23 17:31:29.382266 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 23 17:31:29.382347 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Jan 23 17:31:29.382426 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Jan 23 17:31:29.382508 kernel: pci 
0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 23 17:31:29.382587 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Jan 23 17:31:29.382666 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Jan 23 17:31:29.382748 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 23 17:31:29.382843 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Jan 23 17:31:29.382926 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Jan 23 17:31:29.383008 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 23 17:31:29.383087 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Jan 23 17:31:29.383165 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Jan 23 17:31:29.383247 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 23 17:31:29.383326 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Jan 23 17:31:29.383408 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Jan 23 17:31:29.383489 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 23 17:31:29.383568 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Jan 23 17:31:29.383647 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Jan 23 17:31:29.383728 kernel: pci 0000:00:01.0: bridge window [mem 
0x10000000-0x101fffff]: assigned Jan 23 17:31:29.383831 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 23 17:31:29.383915 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 23 17:31:29.383996 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 23 17:31:29.384080 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 23 17:31:29.384165 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 23 17:31:29.384246 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 23 17:31:29.384324 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 23 17:31:29.384409 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 23 17:31:29.384490 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 23 17:31:29.384571 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 23 17:31:29.384652 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 23 17:31:29.384732 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 23 17:31:29.384833 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 23 17:31:29.384927 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 23 17:31:29.385009 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 23 17:31:29.385090 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 23 17:31:29.385171 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 23 17:31:29.385252 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Jan 23 
17:31:29.385334 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: assigned Jan 23 17:31:29.385417 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Jan 23 17:31:29.385498 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Jan 23 17:31:29.385582 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Jan 23 17:31:29.385665 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Jan 23 17:31:29.385749 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Jan 23 17:31:29.385852 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Jan 23 17:31:29.385935 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Jan 23 17:31:29.386037 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Jan 23 17:31:29.386121 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Jan 23 17:31:29.386201 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Jan 23 17:31:29.386282 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Jan 23 17:31:29.386361 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Jan 23 17:31:29.386441 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Jan 23 17:31:29.386522 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Jan 23 17:31:29.386603 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Jan 23 17:31:29.386682 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Jan 23 17:31:29.386776 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Jan 23 17:31:29.386892 kernel: pci 0000:00:03.2: 
bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Jan 23 17:31:29.386992 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Jan 23 17:31:29.387079 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Jan 23 17:31:29.387174 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Jan 23 17:31:29.387254 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Jan 23 17:31:29.387336 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Jan 23 17:31:29.387415 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Jan 23 17:31:29.387495 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Jan 23 17:31:29.387574 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Jan 23 17:31:29.387674 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Jan 23 17:31:29.387768 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Jan 23 17:31:29.387858 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Jan 23 17:31:29.387938 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Jan 23 17:31:29.388019 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Jan 23 17:31:29.388097 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Jan 23 17:31:29.388180 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Jan 23 17:31:29.388260 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Jan 23 17:31:29.388340 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Jan 23 17:31:29.388420 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 
64bit pref]: assigned Jan 23 17:31:29.388499 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]: assigned Jan 23 17:31:29.388578 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Jan 23 17:31:29.388660 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Jan 23 17:31:29.388739 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Jan 23 17:31:29.388860 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Jan 23 17:31:29.388947 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Jan 23 17:31:29.389030 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Jan 23 17:31:29.389110 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Jan 23 17:31:29.389193 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Jan 23 17:31:29.389281 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Jan 23 17:31:29.389362 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Jan 23 17:31:29.389450 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Jan 23 17:31:29.389532 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Jan 23 17:31:29.389613 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Jan 23 17:31:29.389692 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Jan 23 17:31:29.389787 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Jan 23 17:31:29.389874 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Jan 23 17:31:29.389954 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Jan 23 17:31:29.390034 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Jan 23 17:31:29.390139 kernel: pci 0000:00:01.4: bridge 
window [io 0x5000-0x5fff]: assigned Jan 23 17:31:29.390221 kernel: pci 0000:00:01.5: BAR 0 [mem 0x14205000-0x14205fff]: assigned Jan 23 17:31:29.390300 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Jan 23 17:31:29.390384 kernel: pci 0000:00:01.6: BAR 0 [mem 0x14206000-0x14206fff]: assigned Jan 23 17:31:29.390464 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Jan 23 17:31:29.390545 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Jan 23 17:31:29.390625 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Jan 23 17:31:29.390705 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Jan 23 17:31:29.390798 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Jan 23 17:31:29.390885 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Jan 23 17:31:29.390964 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Jan 23 17:31:29.391061 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Jan 23 17:31:29.391147 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Jan 23 17:31:29.391227 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Jan 23 17:31:29.391308 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Jan 23 17:31:29.391389 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Jan 23 17:31:29.391472 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Jan 23 17:31:29.391552 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Jan 23 17:31:29.391633 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Jan 23 17:31:29.391715 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Jan 23 17:31:29.391817 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Jan 23 17:31:29.391901 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Jan 23 
17:31:29.391982 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.392063 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.392144 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Jan 23 17:31:29.392225 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.392308 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.392390 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Jan 23 17:31:29.392474 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.392558 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.392645 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Jan 23 17:31:29.392760 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.392867 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.392979 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Jan 23 17:31:29.393062 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.393141 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.393221 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Jan 23 17:31:29.393301 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.393381 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.393465 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Jan 23 17:31:29.393545 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.393641 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.393726 kernel: pci 
0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Jan 23 17:31:29.393831 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.393916 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.394002 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Jan 23 17:31:29.394106 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.394188 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.394272 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Jan 23 17:31:29.394353 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.394432 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.394513 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Jan 23 17:31:29.394595 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.394676 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.394771 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Jan 23 17:31:29.394856 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.394939 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.395024 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Jan 23 17:31:29.395115 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.395199 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.395282 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Jan 23 17:31:29.395361 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.395440 kernel: pci 0000:00:04.4: bridge window [io 
size 0x1000]: failed to assign Jan 23 17:31:29.395523 kernel: pci 0000:00:04.5: BAR 0 [mem 0x1421d000-0x1421dfff]: assigned Jan 23 17:31:29.395608 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.395689 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.395778 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Jan 23 17:31:29.395859 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.395939 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.396020 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Jan 23 17:31:29.396107 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.396188 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.396270 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Jan 23 17:31:29.396351 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.396429 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.396509 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Jan 23 17:31:29.396588 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Jan 23 17:31:29.396669 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Jan 23 17:31:29.396748 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Jan 23 17:31:29.396864 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Jan 23 17:31:29.396951 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Jan 23 17:31:29.397031 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Jan 23 17:31:29.397111 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Jan 23 17:31:29.397209 kernel: pci 0000:00:04.0: 
bridge window [io 0x9000-0x9fff]: assigned Jan 23 17:31:29.397291 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]: assigned Jan 23 17:31:29.397372 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Jan 23 17:31:29.397452 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Jan 23 17:31:29.397531 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Jan 23 17:31:29.397614 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Jan 23 17:31:29.397693 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Jan 23 17:31:29.397795 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.397877 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.397959 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.398039 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.398119 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.398198 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.398276 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.398355 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.398434 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.398515 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.398594 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.398673 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 23 17:31:29.398762 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:31:29.398867 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed 
to assign
Jan 23 17:31:29.398954 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.399037 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.399117 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.399195 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.399275 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.399354 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.399434 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.399516 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.399596 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.399675 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.399769 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.399856 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.399937 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.400019 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.400100 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.400179 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.400260 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.400339 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.400422 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.400501 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.400581 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jan 23 17:31:29.400658 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign
Jan 23 17:31:29.400745 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Jan 23 17:31:29.400866 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jan 23 17:31:29.400955 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Jan 23 17:31:29.401040 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 23 17:31:29.401122 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 17:31:29.401201 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 17:31:29.401289 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Jan 23 17:31:29.401368 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jan 23 17:31:29.401449 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]
Jan 23 17:31:29.401529 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 17:31:29.401614 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Jan 23 17:31:29.401696 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Jan 23 17:31:29.401790 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Jan 23 17:31:29.401871 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]
Jan 23 17:31:29.401952 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 17:31:29.402037 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Jan 23 17:31:29.402117 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Jan 23 17:31:29.402196 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]
Jan 23 17:31:29.402274 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 17:31:29.402358 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Jan 23 17:31:29.402442 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Jan 23 17:31:29.402520 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Jan 23 17:31:29.402599 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 17:31:29.402679 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 17:31:29.402797 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Jan 23 17:31:29.402903 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Jan 23 17:31:29.402987 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Jan 23 17:31:29.403068 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 17:31:29.403147 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 17:31:29.403226 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Jan 23 17:31:29.403305 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 17:31:29.403385 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 17:31:29.403466 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Jan 23 17:31:29.403545 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 17:31:29.403624 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 17:31:29.403704 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Jan 23 17:31:29.403792 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Jan 23 17:31:29.403873 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 23 17:31:29.403954 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Jan 23 17:31:29.404033 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]
Jan 23 17:31:29.404112 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]
Jan 23 17:31:29.404192 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Jan 23 17:31:29.404270 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]
Jan 23 17:31:29.404351 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]
Jan 23 17:31:29.404430 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Jan 23 17:31:29.404510 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]
Jan 23 17:31:29.404589 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]
Jan 23 17:31:29.404668 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Jan 23 17:31:29.404746 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]
Jan 23 17:31:29.404864 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]
Jan 23 17:31:29.404947 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Jan 23 17:31:29.405027 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]
Jan 23 17:31:29.405106 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]
Jan 23 17:31:29.405185 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Jan 23 17:31:29.405267 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]
Jan 23 17:31:29.405346 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]
Jan 23 17:31:29.405426 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Jan 23 17:31:29.405505 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]
Jan 23 17:31:29.405584 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]
Jan 23 17:31:29.405666 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Jan 23 17:31:29.405745 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]
Jan 23 17:31:29.405837 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]
Jan 23 17:31:29.405917 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Jan 23 17:31:29.405997 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]
Jan 23 17:31:29.406076 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]
Jan 23 17:31:29.406158 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Jan 23 17:31:29.406237 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]
Jan 23 17:31:29.406316 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]
Jan 23 17:31:29.406394 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]
Jan 23 17:31:29.406474 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Jan 23 17:31:29.406553 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]
Jan 23 17:31:29.406633 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]
Jan 23 17:31:29.406712 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]
Jan 23 17:31:29.406809 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Jan 23 17:31:29.406890 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]
Jan 23 17:31:29.406969 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]
Jan 23 17:31:29.407048 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]
Jan 23 17:31:29.407127 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Jan 23 17:31:29.407210 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]
Jan 23 17:31:29.407288 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]
Jan 23 17:31:29.407367 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]
Jan 23 17:31:29.407447 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Jan 23 17:31:29.407525 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]
Jan 23 17:31:29.407603 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]
Jan 23 17:31:29.407681 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]
Jan 23 17:31:29.407777 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Jan 23 17:31:29.407861 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]
Jan 23 17:31:29.407940 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]
Jan 23 17:31:29.408018 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]
Jan 23 17:31:29.408097 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Jan 23 17:31:29.408177 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]
Jan 23 17:31:29.408259 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]
Jan 23 17:31:29.408337 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]
Jan 23 17:31:29.408418 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Jan 23 17:31:29.408497 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]
Jan 23 17:31:29.408575 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]
Jan 23 17:31:29.408655 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]
Jan 23 17:31:29.408735 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Jan 23 17:31:29.408840 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]
Jan 23 17:31:29.408926 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]
Jan 23 17:31:29.409006 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]
Jan 23 17:31:29.409086 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Jan 23 17:31:29.409164 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]
Jan 23 17:31:29.409243 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]
Jan 23 17:31:29.409321 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]
Jan 23 17:31:29.409404 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Jan 23 17:31:29.409485 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]
Jan 23 17:31:29.409564 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]
Jan 23 17:31:29.409642 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]
Jan 23 17:31:29.409721 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Jan 23 17:31:29.409821 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]
Jan 23 17:31:29.409905 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]
Jan 23 17:31:29.409983 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]
Jan 23 17:31:29.410064 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Jan 23 17:31:29.410142 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]
Jan 23 17:31:29.410221 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]
Jan 23 17:31:29.410299 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]
Jan 23 17:31:29.410380 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Jan 23 17:31:29.410461 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]
Jan 23 17:31:29.410539 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]
Jan 23 17:31:29.410618 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]
Jan 23 17:31:29.410697 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Jan 23 17:31:29.410787 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]
Jan 23 17:31:29.410868 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]
Jan 23 17:31:29.410969 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]
Jan 23 17:31:29.411059 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 23 17:31:29.411133 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 17:31:29.411205 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 23 17:31:29.411289 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 23 17:31:29.411364 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 17:31:29.411448 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 23 17:31:29.411522 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 17:31:29.411619 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 23 17:31:29.411699 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 17:31:29.411799 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 23 17:31:29.411875 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 17:31:29.411961 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 23 17:31:29.412035 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 17:31:29.412115 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 23 17:31:29.412190 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 17:31:29.412271 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 23 17:31:29.412348 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 17:31:29.412430 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 23 17:31:29.412505 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 17:31:29.412585 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 23 17:31:29.412659 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 23 17:31:29.412741 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff]
Jan 23 17:31:29.412864 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref]
Jan 23 17:31:29.412951 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff]
Jan 23 17:31:29.413029 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref]
Jan 23 17:31:29.413117 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff]
Jan 23 17:31:29.413199 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref]
Jan 23 17:31:29.413281 kernel: pci_bus 0000:0d: resource 1 [mem 0x11800000-0x119fffff]
Jan 23 17:31:29.413377 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref]
Jan 23 17:31:29.413460 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff]
Jan 23 17:31:29.413537 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref]
Jan 23 17:31:29.413618 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff]
Jan 23 17:31:29.413692 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref]
Jan 23 17:31:29.413783 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff]
Jan 23 17:31:29.413859 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref]
Jan 23 17:31:29.413939 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff]
Jan 23 17:31:29.414013 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref]
Jan 23 17:31:29.414095 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff]
Jan 23 17:31:29.414169 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref]
Jan 23 17:31:29.414255 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff]
Jan 23 17:31:29.414334 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff]
Jan 23 17:31:29.414413 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref]
Jan 23 17:31:29.414495 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff]
Jan 23 17:31:29.414570 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff]
Jan 23 17:31:29.414643 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref]
Jan 23 17:31:29.414728 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff]
Jan 23 17:31:29.414817 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff]
Jan 23 17:31:29.414913 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref]
Jan 23 17:31:29.414994 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff]
Jan 23 17:31:29.415068 kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff]
Jan 23 17:31:29.415143 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref]
Jan 23 17:31:29.415222 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff]
Jan 23 17:31:29.415298 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff]
Jan 23 17:31:29.415372 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref]
Jan 23 17:31:29.415451 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff]
Jan 23 17:31:29.415525 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff]
Jan 23 17:31:29.415598 kernel: pci_bus 0000:18: resource 2 [mem 0x8002e00000-0x8002ffffff 64bit pref]
Jan 23 17:31:29.415678 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff]
Jan 23 17:31:29.415770 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff]
Jan 23 17:31:29.415850 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref]
Jan 23 17:31:29.415931 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff]
Jan 23 17:31:29.416005 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff]
Jan 23 17:31:29.416078 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref]
Jan 23 17:31:29.416157 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Jan 23 17:31:29.416233 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff]
Jan 23 17:31:29.416306 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref]
Jan 23 17:31:29.416385 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff]
Jan 23 17:31:29.416459 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff]
Jan 23 17:31:29.416532 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref]
Jan 23 17:31:29.416612 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff]
Jan 23 17:31:29.416686 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff]
Jan 23 17:31:29.416771 kernel: pci_bus 0000:1d: resource 2 [mem 0x8003800000-0x80039fffff 64bit pref]
Jan 23 17:31:29.416874 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff]
Jan 23 17:31:29.416951 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff]
Jan 23 17:31:29.417024 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref]
Jan 23 17:31:29.417108 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff]
Jan 23 17:31:29.417182 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff]
Jan 23 17:31:29.417256 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref]
Jan 23 17:31:29.417336 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff]
Jan 23 17:31:29.417412 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff]
Jan 23 17:31:29.417486 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref]
Jan 23 17:31:29.417569 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff]
Jan 23 17:31:29.417645 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff]
Jan 23 17:31:29.417719 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref]
Jan 23 17:31:29.417729 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 17:31:29.417737 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 17:31:29.417745 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 17:31:29.417770 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 17:31:29.417779 kernel: iommu: Default domain type: Translated
Jan 23 17:31:29.417803 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 17:31:29.417811 kernel: efivars: Registered efivars operations
Jan 23 17:31:29.417819 kernel: vgaarb: loaded
Jan 23 17:31:29.417827 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 17:31:29.417835 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 17:31:29.417846 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:31:29.417854 kernel: pnp: PnP ACPI init
Jan 23 17:31:29.417955 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 23 17:31:29.417967 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 17:31:29.417975 kernel: NET: Registered PF_INET protocol family
Jan 23 17:31:29.417983 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 17:31:29.417991 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Jan 23 17:31:29.418001 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 17:31:29.418009 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 17:31:29.418017 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 23 17:31:29.418025 kernel: TCP: Hash tables configured (established 131072 bind 65536)
Jan 23 17:31:29.418033 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Jan 23 17:31:29.418042 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Jan 23 17:31:29.418050 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 17:31:29.418139 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 23 17:31:29.418156 kernel: PCI: CLS 0 bytes, default 64
Jan 23 17:31:29.418164 kernel: kvm [1]: HYP mode not available
Jan 23 17:31:29.418172 kernel: Initialise system trusted keyrings
Jan 23 17:31:29.418180 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0
Jan 23 17:31:29.418188 kernel: Key type asymmetric registered
Jan 23 17:31:29.418196 kernel: Asymmetric key parser 'x509' registered
Jan 23 17:31:29.418206 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 17:31:29.418215 kernel: io scheduler mq-deadline registered
Jan 23 17:31:29.418223 kernel: io scheduler kyber registered
Jan 23 17:31:29.418230 kernel: io scheduler bfq registered
Jan 23 17:31:29.418239 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 17:31:29.418324 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50
Jan 23 17:31:29.418404 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50
Jan 23 17:31:29.418487 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.418568 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51
Jan 23 17:31:29.418649 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51
Jan 23 17:31:29.418728 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.418825 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52
Jan 23 17:31:29.418915 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52
Jan 23 17:31:29.419000 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.419081 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53
Jan 23 17:31:29.419174 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53
Jan 23 17:31:29.419257 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.419338 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54
Jan 23 17:31:29.419418 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54
Jan 23 17:31:29.419506 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.419588 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55
Jan 23 17:31:29.419668 kernel: pcieport 0000:00:01.5: AER: enabled with IRQ 55
Jan 23 17:31:29.419750 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.419846 kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56
Jan 23 17:31:29.419926 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56
Jan 23 17:31:29.420007 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.420091 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57
Jan 23 17:31:29.420181 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57
Jan 23 17:31:29.420267 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.420279 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 17:31:29.420364 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58
Jan 23 17:31:29.420445 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58
Jan 23 17:31:29.420528 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.420609 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59
Jan 23 17:31:29.420688 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59
Jan 23 17:31:29.420782 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.420880 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60
Jan 23 17:31:29.420968 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60
Jan 23 17:31:29.421053 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.421137 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61
Jan 23 17:31:29.421226 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61
Jan 23 17:31:29.421309 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.421389 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62
Jan 23 17:31:29.421471 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62
Jan 23 17:31:29.421553 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.421636 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63
Jan 23 17:31:29.421717 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63
Jan 23 17:31:29.421812 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.421896 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64
Jan 23 17:31:29.421975 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64
Jan 23 17:31:29.422056 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.422148 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65
Jan 23 17:31:29.422228 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65
Jan 23 17:31:29.422315 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.422327 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 23 17:31:29.422406 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66
Jan 23 17:31:29.422490 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66
Jan 23 17:31:29.422573 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.422654 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67
Jan 23 17:31:29.422738 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67
Jan 23 17:31:29.422836 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.422920 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68
Jan 23 17:31:29.423001 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68
Jan 23 17:31:29.423085 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.423170 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69
Jan 23 17:31:29.423249 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69
Jan 23 17:31:29.423328 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.423408 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70
Jan 23 17:31:29.423486 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70
Jan 23 17:31:29.423566 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.423648 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71
Jan 23 17:31:29.423727 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71
Jan 23 17:31:29.423817 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.423898 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72
Jan 23 17:31:29.423976 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72
Jan 23 17:31:29.424054 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.424139 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73
Jan 23 17:31:29.424221 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73
Jan 23 17:31:29.424301 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.424312 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 17:31:29.424390 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74
Jan 23 17:31:29.424471 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74
Jan 23 17:31:29.424553 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.424636 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75
Jan 23 17:31:29.424715 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75
Jan 23 17:31:29.424809 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.424907 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76
Jan 23 17:31:29.424990 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76
Jan 23 17:31:29.425069 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.425155 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77
Jan 23 17:31:29.425235 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77
Jan 23 17:31:29.425318 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.425401 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78
Jan 23 17:31:29.425481 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78
Jan 23 17:31:29.425563 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.425647 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79
Jan 23 17:31:29.425727 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79
Jan 23 17:31:29.425819 kernel: pcieport 0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.425902 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80
Jan 23 17:31:29.425982 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80
Jan 23 17:31:29.426063 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.426146 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81
Jan 23 17:31:29.426228 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81
Jan 23 17:31:29.426309 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.426390 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82
Jan 23 17:31:29.426471 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82
Jan 23 17:31:29.426551 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 17:31:29.426562 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 17:31:29.426572 kernel: ACPI: button: Power Button [PWRB]
Jan 23 17:31:29.426659 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002)
Jan 23 17:31:29.426746 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 23 17:31:29.426767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 17:31:29.426776 kernel: thunder_xcv, ver 1.0
Jan 23 17:31:29.426784 kernel: thunder_bgx, ver 1.0
Jan 23 17:31:29.426792 kernel: nicpf, ver 1.0
Jan 23 17:31:29.426802 kernel: nicvf, ver 1.0
Jan 23 17:31:29.426898 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 17:31:29.426979 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:31:28 UTC (1769189488)
Jan 23 17:31:29.426989 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 17:31:29.426998 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 23 17:31:29.427007 kernel: watchdog: NMI not fully supported
Jan 23 17:31:29.427017 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 17:31:29.427026 kernel: NET: Registered PF_INET6 protocol family
Jan 23 17:31:29.427034 kernel: Segment Routing with IPv6
Jan 23 17:31:29.427043 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 17:31:29.427051 kernel: NET: Registered PF_PACKET protocol family
Jan 23 17:31:29.427059 kernel: Key type dns_resolver registered
Jan 23 17:31:29.427067 kernel: registered taskstats version 1
Jan 23 17:31:29.427075 kernel: Loading compiled-in X.509 certificates
Jan 23 17:31:29.427085 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2bef814d3854848add18d21bd2681c3d03c60f56'
Jan 23 17:31:29.427093 kernel: Demotion targets for Node 0: null
Jan 23 17:31:29.427101 kernel: Key type .fscrypt registered
Jan 23 17:31:29.427109 kernel: Key type fscrypt-provisioning registered
Jan 23 17:31:29.427117 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 17:31:29.427125 kernel: ima: Allocated hash algorithm: sha1 Jan 23 17:31:29.427133 kernel: ima: No architecture policies found Jan 23 17:31:29.427142 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 17:31:29.427150 kernel: clk: Disabling unused clocks Jan 23 17:31:29.427159 kernel: PM: genpd: Disabling unused power domains Jan 23 17:31:29.427166 kernel: Freeing unused kernel memory: 12480K Jan 23 17:31:29.427174 kernel: Run /init as init process Jan 23 17:31:29.427182 kernel: with arguments: Jan 23 17:31:29.427191 kernel: /init Jan 23 17:31:29.427200 kernel: with environment: Jan 23 17:31:29.427207 kernel: HOME=/ Jan 23 17:31:29.427215 kernel: TERM=linux Jan 23 17:31:29.427223 kernel: ACPI: bus type USB registered Jan 23 17:31:29.427231 kernel: usbcore: registered new interface driver usbfs Jan 23 17:31:29.427239 kernel: usbcore: registered new interface driver hub Jan 23 17:31:29.427247 kernel: usbcore: registered new device driver usb Jan 23 17:31:29.427334 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:31:29.427416 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 17:31:29.427503 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 17:31:29.427585 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:31:29.427670 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 17:31:29.427762 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 17:31:29.427877 kernel: hub 1-0:1.0: USB hub found Jan 23 17:31:29.427980 kernel: hub 1-0:1.0: 4 ports detected Jan 23 17:31:29.428081 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 23 17:31:29.428180 kernel: hub 2-0:1.0: USB hub found Jan 23 17:31:29.428269 kernel: hub 2-0:1.0: 4 ports detected Jan 23 17:31:29.428362 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 17:31:29.428446 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 23 17:31:29.428457 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 17:31:29.428466 kernel: GPT:25804799 != 104857599 Jan 23 17:31:29.428475 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 17:31:29.428483 kernel: GPT:25804799 != 104857599 Jan 23 17:31:29.428491 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 17:31:29.428501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 17:31:29.428509 kernel: SCSI subsystem initialized Jan 23 17:31:29.428518 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:31:29.428526 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:31:29.428535 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:31:29.428543 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:31:29.428553 kernel: raid6: neonx8 gen() 15814 MB/s Jan 23 17:31:29.428586 kernel: raid6: neonx4 gen() 15625 MB/s Jan 23 17:31:29.428595 kernel: raid6: neonx2 gen() 13212 MB/s Jan 23 17:31:29.428604 kernel: raid6: neonx1 gen() 10420 MB/s Jan 23 17:31:29.428612 kernel: raid6: int64x8 gen() 6842 MB/s Jan 23 17:31:29.428620 kernel: raid6: int64x4 gen() 7365 MB/s Jan 23 17:31:29.428628 kernel: raid6: int64x2 gen() 6121 MB/s Jan 23 17:31:29.428636 kernel: raid6: int64x1 gen() 5052 MB/s Jan 23 17:31:29.428647 kernel: raid6: using algorithm neonx8 gen() 15814 MB/s Jan 23 17:31:29.428655 kernel: raid6: .... 
xor() 12076 MB/s, rmw enabled Jan 23 17:31:29.428663 kernel: raid6: using neon recovery algorithm Jan 23 17:31:29.428672 kernel: xor: measuring software checksum speed Jan 23 17:31:29.428682 kernel: 8regs : 21636 MB/sec Jan 23 17:31:29.428691 kernel: 32regs : 21693 MB/sec Jan 23 17:31:29.428701 kernel: arm64_neon : 28205 MB/sec Jan 23 17:31:29.428709 kernel: xor: using function: arm64_neon (28205 MB/sec) Jan 23 17:31:29.428717 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:31:29.428848 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 17:31:29.428864 kernel: BTRFS: device fsid 8d2a73a7-ed2a-4757-891b-9df844aa914e devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (275) Jan 23 17:31:29.428873 kernel: BTRFS info (device dm-0): first mount of filesystem 8d2a73a7-ed2a-4757-891b-9df844aa914e Jan 23 17:31:29.428881 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:31:29.428893 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:31:29.428901 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:31:29.428910 kernel: loop: module loaded Jan 23 17:31:29.428918 kernel: loop0: detected capacity change from 0 to 91840 Jan 23 17:31:29.428927 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 17:31:29.428936 systemd[1]: Successfully made /usr/ read-only. Jan 23 17:31:29.428949 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:31:29.428959 systemd[1]: Detected virtualization kvm. 
Jan 23 17:31:29.429065 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 23 17:31:29.429078 systemd[1]: Detected architecture arm64. Jan 23 17:31:29.429087 systemd[1]: Running in initrd. Jan 23 17:31:29.429095 systemd[1]: No hostname configured, using default hostname. Jan 23 17:31:29.429106 systemd[1]: Hostname set to . Jan 23 17:31:29.429115 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 17:31:29.429123 systemd[1]: Queued start job for default target initrd.target. Jan 23 17:31:29.429132 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:31:29.429141 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:31:29.429151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:31:29.429161 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 17:31:29.429170 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:31:29.429180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 17:31:29.429189 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 17:31:29.429197 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:31:29.429206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:31:29.429216 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:31:29.429225 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:31:29.429234 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:31:29.429243 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:31:29.429251 systemd[1]: Reached target timers.target - Timer Units. 
Jan 23 17:31:29.429260 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:31:29.429269 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:31:29.429279 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:31:29.429288 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 17:31:29.429297 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 17:31:29.429305 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:31:29.429314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:31:29.429323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:31:29.429332 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:31:29.429342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:31:29.429351 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 17:31:29.429360 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:31:29.429369 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 17:31:29.429378 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 17:31:29.429387 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 17:31:29.429397 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:31:29.429406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:31:29.429416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:31:29.429424 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 23 17:31:29.429435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:31:29.429444 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 17:31:29.429453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:31:29.429483 systemd-journald[416]: Collecting audit messages is enabled. Jan 23 17:31:29.429506 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 17:31:29.429514 kernel: Bridge firewalling registered Jan 23 17:31:29.429523 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:31:29.429532 kernel: audit: type=1130 audit(1769189489.365:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.429541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:31:29.429552 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:31:29.429561 kernel: audit: type=1130 audit(1769189489.374:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.429570 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:31:29.429579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:31:29.429588 kernel: audit: type=1130 audit(1769189489.387:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:29.429597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 17:31:29.429606 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:31:29.429617 kernel: audit: type=1130 audit(1769189489.397:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.429626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:31:29.429635 kernel: audit: type=1130 audit(1769189489.401:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.429644 kernel: audit: type=1334 audit(1769189489.410:7): prog-id=6 op=LOAD Jan 23 17:31:29.429653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:31:29.429662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:31:29.429673 systemd-journald[416]: Journal started Jan 23 17:31:29.429691 systemd-journald[416]: Runtime Journal (/run/log/journal/4fa42b73305646c5ab7089141557ee6a) is 8M, max 319.5M, 311.5M free. Jan 23 17:31:29.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 23 17:31:29.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.410000 audit: BPF prog-id=6 op=LOAD Jan 23 17:31:29.363818 systemd-modules-load[417]: Inserted module 'br_netfilter' Jan 23 17:31:29.434278 kernel: audit: type=1130 audit(1769189489.430:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.434305 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:31:29.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.437768 kernel: audit: type=1130 audit(1769189489.434:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.438898 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 17:31:29.440191 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:31:29.450460 systemd-tmpfiles[454]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 23 17:31:29.454079 systemd-resolved[433]: Positive Trust Anchors: Jan 23 17:31:29.454094 systemd-resolved[433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:31:29.454097 systemd-resolved[433]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 17:31:29.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.454129 systemd-resolved[433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:31:29.468637 kernel: audit: type=1130 audit(1769189489.456:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.468671 dracut-cmdline[453]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=35f959b0e84cd72dec35dcaa9fdae098b059a7436b8ff34bc604c87ac6375079 Jan 23 17:31:29.454932 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 17:31:29.480997 systemd-resolved[433]: Defaulting to hostname 'linux'. Jan 23 17:31:29.481824 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:31:29.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.482776 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:31:29.543780 kernel: Loading iSCSI transport class v2.0-870. Jan 23 17:31:29.555790 kernel: iscsi: registered transport (tcp) Jan 23 17:31:29.569788 kernel: iscsi: registered transport (qla4xxx) Jan 23 17:31:29.569831 kernel: QLogic iSCSI HBA Driver Jan 23 17:31:29.593412 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:31:29.613720 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:31:29.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.615748 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:31:29.662295 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 17:31:29.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.664655 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 17:31:29.666337 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 17:31:29.702507 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 23 17:31:29.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.703000 audit: BPF prog-id=7 op=LOAD Jan 23 17:31:29.703000 audit: BPF prog-id=8 op=LOAD Jan 23 17:31:29.705073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:31:29.739982 systemd-udevd[695]: Using default interface naming scheme 'v257'. Jan 23 17:31:29.748272 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:31:29.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.751215 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:31:29.773318 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:31:29.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.774000 audit: BPF prog-id=9 op=LOAD Jan 23 17:31:29.776395 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:31:29.780232 dracut-pre-trigger[769]: rd.md=0: removing MD RAID activation Jan 23 17:31:29.805834 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:31:29.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.807878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 23 17:31:29.817569 systemd-networkd[802]: lo: Link UP Jan 23 17:31:29.817578 systemd-networkd[802]: lo: Gained carrier Jan 23 17:31:29.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.818043 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:31:29.819232 systemd[1]: Reached target network.target - Network. Jan 23 17:31:29.892340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:31:29.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:29.896677 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:31:29.942715 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 17:31:29.952281 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 17:31:29.974259 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 17:31:29.976935 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:31:29.991941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 17:31:29.996673 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 23 17:31:29.996696 disk-uuid[872]: Primary Header is updated. Jan 23 17:31:29.996696 disk-uuid[872]: Secondary Entries is updated. Jan 23 17:31:29.996696 disk-uuid[872]: Secondary Header is updated. 
Jan 23 17:31:30.004768 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 17:31:30.008548 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 23 17:31:30.042349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:31:30.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:30.042500 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:31:30.044301 systemd-networkd[802]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:30.044305 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:31:30.045490 systemd-networkd[802]: eth0: Link UP Jan 23 17:31:30.045655 systemd-networkd[802]: eth0: Gained carrier Jan 23 17:31:30.045666 systemd-networkd[802]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:30.045985 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:31:30.050003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:31:30.065181 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 23 17:31:30.065404 kernel: usbcore: registered new interface driver usbhid Jan 23 17:31:30.065971 kernel: usbhid: USB HID core driver Jan 23 17:31:30.078343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 17:31:30.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:30.112662 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:31:30.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:30.114098 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:31:30.115636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:31:30.117926 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:31:30.120441 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:31:30.122106 systemd-networkd[802]: eth0: DHCPv4 address 10.0.6.147/25, gateway 10.0.6.129 acquired from 10.0.6.129 Jan 23 17:31:30.143812 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:31:30.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:31.026183 disk-uuid[873]: Warning: The kernel is still using the old partition table. Jan 23 17:31:31.026183 disk-uuid[873]: The new table will be used at the next reboot or after you Jan 23 17:31:31.026183 disk-uuid[873]: run partprobe(8) or kpartx(8) Jan 23 17:31:31.026183 disk-uuid[873]: The operation has completed successfully. Jan 23 17:31:31.034729 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:31:31.034864 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 23 17:31:31.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:31.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:31.037682 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 17:31:31.077800 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (906) Jan 23 17:31:31.079591 kernel: BTRFS info (device vda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:31:31.079618 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:31:31.083893 kernel: BTRFS info (device vda6): turning on async discard Jan 23 17:31:31.083928 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 17:31:31.089737 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 17:31:31.091554 kernel: BTRFS info (device vda6): last unmount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:31:31.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:31.091782 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 23 17:31:31.228263 ignition[925]: Ignition 2.24.0 Jan 23 17:31:31.228282 ignition[925]: Stage: fetch-offline Jan 23 17:31:31.228317 ignition[925]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:31.228326 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 17:31:31.228480 ignition[925]: parsed url from cmdline: "" Jan 23 17:31:31.228483 ignition[925]: no config URL provided Jan 23 17:31:31.228487 ignition[925]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:31:31.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:31.231380 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:31:31.228494 ignition[925]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:31:31.233523 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 17:31:31.228499 ignition[925]: failed to fetch config: resource requires networking Jan 23 17:31:31.228724 ignition[925]: Ignition finished successfully Jan 23 17:31:31.260646 ignition[938]: Ignition 2.24.0 Jan 23 17:31:31.260667 ignition[938]: Stage: fetch Jan 23 17:31:31.260832 ignition[938]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:31.260842 ignition[938]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 17:31:31.260923 ignition[938]: parsed url from cmdline: "" Jan 23 17:31:31.260926 ignition[938]: no config URL provided Jan 23 17:31:31.260930 ignition[938]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:31:31.260935 ignition[938]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:31:31.261259 ignition[938]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 23 17:31:31.261275 ignition[938]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... 
Jan 23 17:31:31.261588 ignition[938]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 23 17:31:31.365914 systemd-networkd[802]: eth0: Gained IPv6LL Jan 23 17:31:32.101105 ignition[938]: GET result: OK Jan 23 17:31:32.101380 ignition[938]: parsing config with SHA512: 986cb70018cf299870c937db6125df42180f007b5727aaf6e99116a09512530292df0541eee66f7be70966c3561a8bf29e36a9e5023368a27f5d82461c7ea30e Jan 23 17:31:32.106310 unknown[938]: fetched base config from "system" Jan 23 17:31:32.106320 unknown[938]: fetched base config from "system" Jan 23 17:31:32.106644 ignition[938]: fetch: fetch complete Jan 23 17:31:32.106324 unknown[938]: fetched user config from "openstack" Jan 23 17:31:32.106648 ignition[938]: fetch: fetch passed Jan 23 17:31:32.113744 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 23 17:31:32.113795 kernel: audit: type=1130 audit(1769189492.110:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.106690 ignition[938]: Ignition finished successfully Jan 23 17:31:32.110347 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 17:31:32.112451 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 17:31:32.143806 ignition[947]: Ignition 2.24.0 Jan 23 17:31:32.143818 ignition[947]: Stage: kargs Jan 23 17:31:32.143966 ignition[947]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:32.143974 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 17:31:32.144707 ignition[947]: kargs: kargs passed Jan 23 17:31:32.144749 ignition[947]: Ignition finished successfully Jan 23 17:31:32.147648 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 17:31:32.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.151223 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 17:31:32.152676 kernel: audit: type=1130 audit(1769189492.147:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.182025 ignition[954]: Ignition 2.24.0 Jan 23 17:31:32.182043 ignition[954]: Stage: disks Jan 23 17:31:32.182186 ignition[954]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:32.182195 ignition[954]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 17:31:32.184771 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 17:31:32.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.182922 ignition[954]: disks: disks passed Jan 23 17:31:32.190363 kernel: audit: type=1130 audit(1769189492.186:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:32.186442 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 17:31:32.182965 ignition[954]: Ignition finished successfully Jan 23 17:31:32.189966 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 17:31:32.191281 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:31:32.192719 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:31:32.194032 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:31:32.196431 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 17:31:32.235446 systemd-fsck[963]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 23 17:31:32.237768 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 17:31:32.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.240275 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 17:31:32.243681 kernel: audit: type=1130 audit(1769189492.238:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.334805 kernel: EXT4-fs (vda9): mounted filesystem 6e8555bb-6998-46ec-8ba6-5a7a415f09ac r/w with ordered data mode. Quota mode: none. Jan 23 17:31:32.335779 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 17:31:32.336885 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 17:31:32.339918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:31:32.341456 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 23 17:31:32.342323 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 17:31:32.342915 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 17:31:32.345670 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 17:31:32.345703 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:31:32.358302 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 17:31:32.362220 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 17:31:32.368777 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (971) Jan 23 17:31:32.371847 kernel: BTRFS info (device vda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:31:32.371881 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:31:32.377259 kernel: BTRFS info (device vda6): turning on async discard Jan 23 17:31:32.377326 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 17:31:32.379131 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:31:32.403809 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:32.506706 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 17:31:32.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.510542 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 23 17:31:32.512059 kernel: audit: type=1130 audit(1769189492.506:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.512002 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 17:31:32.526626 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 17:31:32.529767 kernel: BTRFS info (device vda6): last unmount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:31:32.543886 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 17:31:32.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.548779 kernel: audit: type=1130 audit(1769189492.543:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.555273 ignition[1073]: INFO : Ignition 2.24.0 Jan 23 17:31:32.555273 ignition[1073]: INFO : Stage: mount Jan 23 17:31:32.556606 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:32.556606 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 17:31:32.556606 ignition[1073]: INFO : mount: mount passed Jan 23 17:31:32.556606 ignition[1073]: INFO : Ignition finished successfully Jan 23 17:31:32.563544 kernel: audit: type=1130 audit(1769189492.559:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:32.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:32.558552 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 17:31:33.428776 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:35.433780 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:39.438779 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:39.443129 coreos-metadata[973]: Jan 23 17:31:39.442 WARN failed to locate config-drive, using the metadata service API instead Jan 23 17:31:39.461978 coreos-metadata[973]: Jan 23 17:31:39.461 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 17:31:40.112817 coreos-metadata[973]: Jan 23 17:31:40.112 INFO Fetch successful Jan 23 17:31:40.113959 coreos-metadata[973]: Jan 23 17:31:40.112 INFO wrote hostname ci-4547-1-0-4-2c8b61c80e to /sysroot/etc/hostname Jan 23 17:31:40.118898 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 17:31:40.120137 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 17:31:40.127897 kernel: audit: type=1130 audit(1769189500.121:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:40.127922 kernel: audit: type=1131 audit(1769189500.121:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:40.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:40.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 23 17:31:40.123629 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 17:31:40.144046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:31:40.182737 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1089) Jan 23 17:31:40.182800 kernel: BTRFS info (device vda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:31:40.183762 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:31:40.187983 kernel: BTRFS info (device vda6): turning on async discard Jan 23 17:31:40.188022 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 17:31:40.189397 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:31:40.216598 ignition[1107]: INFO : Ignition 2.24.0 Jan 23 17:31:40.216598 ignition[1107]: INFO : Stage: files Jan 23 17:31:40.218339 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:40.218339 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 17:31:40.218339 ignition[1107]: DEBUG : files: compiled without relabeling support, skipping Jan 23 17:31:40.221631 ignition[1107]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 17:31:40.221631 ignition[1107]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 17:31:40.224315 ignition[1107]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 17:31:40.224315 ignition[1107]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 17:31:40.224315 ignition[1107]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 17:31:40.224039 unknown[1107]: wrote ssh authorized keys file for user: core Jan 23 17:31:40.228959 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:31:40.228959 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 17:31:40.274953 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 17:31:40.382412 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:31:40.382412 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:31:40.385691 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:31:40.403895 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 23 17:31:40.658022 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 17:31:42.055955 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:31:42.055955 ignition[1107]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 17:31:42.059566 ignition[1107]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:31:42.063423 ignition[1107]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:31:42.063423 ignition[1107]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 17:31:42.063423 ignition[1107]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 17:31:42.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.071006 ignition[1107]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 17:31:42.071006 ignition[1107]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:31:42.071006 ignition[1107]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:31:42.071006 ignition[1107]: INFO : files: files passed Jan 23 17:31:42.071006 ignition[1107]: INFO : Ignition finished successfully Jan 23 17:31:42.077473 kernel: audit: type=1130 audit(1769189502.066:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.066911 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 17:31:42.069913 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 17:31:42.072445 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 17:31:42.086220 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 17:31:42.087785 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 17:31:42.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.093742 kernel: audit: type=1130 audit(1769189502.087:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 17:31:42.093793 kernel: audit: type=1131 audit(1769189502.087:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.097352 initrd-setup-root-after-ignition[1140]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:31:42.097352 initrd-setup-root-after-ignition[1140]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:31:42.100533 initrd-setup-root-after-ignition[1144]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:31:42.101208 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:31:42.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.103120 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:31:42.108719 kernel: audit: type=1130 audit(1769189502.102:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.108469 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:31:42.140223 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:31:42.140354 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:31:42.146803 kernel: audit: type=1130 audit(1769189502.141:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:42.146840 kernel: audit: type=1131 audit(1769189502.141:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.142143 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:31:42.147564 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:31:42.149235 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:31:42.150165 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:31:42.165152 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:31:42.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.167378 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:31:42.170902 kernel: audit: type=1130 audit(1769189502.165:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.188915 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:31:42.189045 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 17:31:42.191399 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:31:42.193028 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:31:42.194664 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:31:42.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.194798 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:31:42.199798 kernel: audit: type=1131 audit(1769189502.196:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.199223 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:31:42.200594 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:31:42.201962 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:31:42.203340 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:31:42.204868 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:31:42.206585 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:31:42.208164 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:31:42.209957 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:31:42.211594 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:31:42.213700 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:31:42.215158 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:31:42.216397 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 23 17:31:42.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.216530 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:31:42.218410 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:31:42.219987 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:31:42.221568 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:31:42.224931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:31:42.226417 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:31:42.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.226533 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:31:42.229218 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:31:42.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.229342 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:31:42.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.230872 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:31:42.230977 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jan 23 17:31:42.233346 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:31:42.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.234713 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:31:42.234865 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:31:42.237314 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:31:42.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.238428 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:31:42.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.238579 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:31:42.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.240343 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:31:42.240451 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:31:42.241775 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:31:42.241884 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:31:42.247030 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 23 17:31:42.254908 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:31:42.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.267087 ignition[1165]: INFO : Ignition 2.24.0 Jan 23 17:31:42.267087 ignition[1165]: INFO : Stage: umount Jan 23 17:31:42.269623 ignition[1165]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:31:42.269623 ignition[1165]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 17:31:42.269623 ignition[1165]: INFO : umount: umount passed Jan 23 17:31:42.269623 ignition[1165]: INFO : Ignition finished successfully Jan 23 17:31:42.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.269054 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:31:42.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.271171 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:31:42.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 17:31:42.271308 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:31:42.272662 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:31:42.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.272703 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:31:42.273781 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:31:42.273820 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:31:42.275393 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:31:42.275434 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:31:42.276717 systemd[1]: Stopped target network.target - Network. Jan 23 17:31:42.278356 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:31:42.278410 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:31:42.279823 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:31:42.281973 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 17:31:42.286182 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:31:42.287350 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:31:42.288657 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:31:42.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.290248 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 23 17:31:42.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.290291 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:31:42.291591 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:31:42.291616 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:31:42.293207 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 17:31:42.293228 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:31:42.295480 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:31:42.295544 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:31:42.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.296830 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:31:42.296870 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:31:42.299148 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:31:42.300419 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:31:42.307099 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:31:42.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.313000 audit: BPF prog-id=6 op=UNLOAD Jan 23 17:31:42.307214 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:31:42.310230 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 23 17:31:42.311851 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:31:42.315000 audit: BPF prog-id=9 op=UNLOAD Jan 23 17:31:42.316098 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:31:42.318658 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:31:42.318712 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:31:42.321696 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:31:42.323270 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:31:42.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.323336 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:31:42.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.324882 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:31:42.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:42.324924 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:31:42.326371 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:31:42.326410 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:31:42.328567 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:31:42.343827 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 23 17:31:42.344650 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:31:42.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.347370 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 17:31:42.347448 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:31:42.348577 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 17:31:42.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.348610 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:31:42.350523 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 17:31:42.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.350583 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 17:31:42.352875 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 17:31:42.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.352931 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 17:31:42.355095 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 17:31:42.355143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 17:31:42.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.358228 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 17:31:42.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.359154 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 17:31:42.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.359207 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:31:42.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.360746 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 17:31:42.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.360822 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:31:42.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.362696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:31:42.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:42.362738 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:31:42.365144 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 17:31:42.365247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 17:31:42.366204 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 17:31:42.366293 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 17:31:42.368353 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 17:31:42.368437 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 17:31:42.370606 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 17:31:42.371555 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 17:31:42.371635 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 17:31:42.373415 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 17:31:42.397261 systemd[1]: Switching root.
Jan 23 17:31:42.439087 systemd-journald[416]: Journal stopped
Jan 23 17:31:43.280184 systemd-journald[416]: Received SIGTERM from PID 1 (systemd).
Jan 23 17:31:43.280266 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 17:31:43.280288 kernel: SELinux: policy capability open_perms=1
Jan 23 17:31:43.280302 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 17:31:43.280320 kernel: SELinux: policy capability always_check_network=0
Jan 23 17:31:43.280330 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 17:31:43.280346 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 17:31:43.280357 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 17:31:43.280367 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 17:31:43.280376 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 17:31:43.280389 systemd[1]: Successfully loaded SELinux policy in 64.624ms.
Jan 23 17:31:43.280410 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.669ms.
Jan 23 17:31:43.280422 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:31:43.280436 systemd[1]: Detected virtualization kvm.
Jan 23 17:31:43.280449 systemd[1]: Detected architecture arm64.
Jan 23 17:31:43.280460 systemd[1]: Detected first boot.
Jan 23 17:31:43.280470 systemd[1]: Hostname set to .
Jan 23 17:31:43.280482 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 23 17:31:43.280494 zram_generator::config[1210]: No configuration found.
Jan 23 17:31:43.280507 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 17:31:43.280518 systemd[1]: Populated /etc with preset unit settings.
Jan 23 17:31:43.280529 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 17:31:43.280540 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 17:31:43.280551 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 17:31:43.280563 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 17:31:43.280574 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 17:31:43.280585 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 17:31:43.280596 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 17:31:43.280607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 17:31:43.280618 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 17:31:43.280629 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 17:31:43.280641 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 17:31:43.280652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:31:43.280662 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:31:43.280673 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 17:31:43.280690 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 17:31:43.280701 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 17:31:43.280714 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:31:43.280726 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 23 17:31:43.280737 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:31:43.280747 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:31:43.280861 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 17:31:43.280875 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 17:31:43.280892 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:31:43.280903 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 17:31:43.280914 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:31:43.280924 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 17:31:43.280935 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 23 17:31:43.280946 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:31:43.280956 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:31:43.280969 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 17:31:43.280980 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 17:31:43.280992 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 17:31:43.281005 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 23 17:31:43.281016 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 23 17:31:43.281027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:31:43.281038 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 23 17:31:43.281051 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 23 17:31:43.281061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:31:43.281072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:31:43.281083 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 17:31:43.281094 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 17:31:43.281104 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 17:31:43.281115 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 17:31:43.281127 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 17:31:43.281138 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 17:31:43.281148 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 17:31:43.281160 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 17:31:43.281171 systemd[1]: Reached target machines.target - Containers.
Jan 23 17:31:43.281182 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 17:31:43.281193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:31:43.281204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:31:43.281215 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 17:31:43.281229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:31:43.281240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:31:43.281253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:31:43.281264 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 17:31:43.281275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:31:43.281286 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 17:31:43.281297 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 17:31:43.281308 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 17:31:43.281319 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 17:31:43.281331 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 17:31:43.281343 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:31:43.281355 kernel: fuse: init (API version 7.41)
Jan 23 17:31:43.281366 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:31:43.281377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:31:43.281387 kernel: ACPI: bus type drm_connector registered
Jan 23 17:31:43.281398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:31:43.281410 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 17:31:43.281421 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 17:31:43.281434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:31:43.281446 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 17:31:43.281457 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 17:31:43.281467 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 17:31:43.281501 systemd-journald[1277]: Collecting audit messages is enabled.
Jan 23 17:31:43.281527 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 17:31:43.281538 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 17:31:43.281548 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 17:31:43.281559 systemd-journald[1277]: Journal started
Jan 23 17:31:43.281581 systemd-journald[1277]: Runtime Journal (/run/log/journal/4fa42b73305646c5ab7089141557ee6a) is 8M, max 319.5M, 311.5M free.
Jan 23 17:31:43.141000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 23 17:31:43.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.231000 audit: BPF prog-id=14 op=UNLOAD
Jan 23 17:31:43.231000 audit: BPF prog-id=13 op=UNLOAD
Jan 23 17:31:43.232000 audit: BPF prog-id=15 op=LOAD
Jan 23 17:31:43.232000 audit: BPF prog-id=16 op=LOAD
Jan 23 17:31:43.232000 audit: BPF prog-id=17 op=LOAD
Jan 23 17:31:43.277000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 23 17:31:43.277000 audit[1277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff3c7fb80 a2=4000 a3=0 items=0 ppid=1 pid=1277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:31:43.277000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 23 17:31:43.053686 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 17:31:43.078061 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 17:31:43.078467 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 17:31:43.283849 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:31:43.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.285819 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:31:43.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.286809 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 17:31:43.286993 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 17:31:43.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.288356 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 17:31:43.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.289639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:31:43.289832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:31:43.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.291033 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:31:43.291185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:31:43.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.292336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:31:43.292499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:31:43.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.293995 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 17:31:43.294139 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 17:31:43.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.295366 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:31:43.295520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:31:43.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.297004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:31:43.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.299889 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:31:43.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.301925 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 17:31:43.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.303288 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 17:31:43.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.315333 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:31:43.317123 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 23 17:31:43.319170 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 17:31:43.321091 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 17:31:43.322034 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 17:31:43.322064 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:31:43.323779 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 17:31:43.324912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:31:43.325024 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 23 17:31:43.339911 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 17:31:43.341795 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 17:31:43.342848 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:31:43.346904 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 17:31:43.347838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:31:43.348865 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:31:43.350967 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 17:31:43.354419 systemd-journald[1277]: Time spent on flushing to /var/log/journal/4fa42b73305646c5ab7089141557ee6a is 47.449ms for 1810 entries.
Jan 23 17:31:43.354419 systemd-journald[1277]: System Journal (/var/log/journal/4fa42b73305646c5ab7089141557ee6a) is 8M, max 588.1M, 580.1M free.
Jan 23 17:31:43.428646 systemd-journald[1277]: Received client request to flush runtime journal.
Jan 23 17:31:43.428700 kernel: loop1: detected capacity change from 0 to 100192
Jan 23 17:31:43.428725 kernel: loop2: detected capacity change from 0 to 45344
Jan 23 17:31:43.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.402000 audit: BPF prog-id=18 op=LOAD
Jan 23 17:31:43.403000 audit: BPF prog-id=19 op=LOAD
Jan 23 17:31:43.403000 audit: BPF prog-id=20 op=LOAD
Jan 23 17:31:43.409000 audit: BPF prog-id=21 op=LOAD
Jan 23 17:31:43.417000 audit: BPF prog-id=22 op=LOAD
Jan 23 17:31:43.423000 audit: BPF prog-id=23 op=LOAD
Jan 23 17:31:43.423000 audit: BPF prog-id=24 op=LOAD
Jan 23 17:31:43.426000 audit: BPF prog-id=25 op=LOAD
Jan 23 17:31:43.426000 audit: BPF prog-id=26 op=LOAD
Jan 23 17:31:43.426000 audit: BPF prog-id=27 op=LOAD
Jan 23 17:31:43.352963 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 17:31:43.354993 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 17:31:43.356603 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 17:31:43.360254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:31:43.372505 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 17:31:43.374018 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 17:31:43.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.376207 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 17:31:43.378955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:31:43.396815 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 17:31:43.406022 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 23 17:31:43.411180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:31:43.413030 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:31:43.424661 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 23 17:31:43.427947 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 17:31:43.431119 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 17:31:43.437145 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 17:31:43.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.457468 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Jan 23 17:31:43.457484 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Jan 23 17:31:43.461400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:31:43.462019 kernel: loop3: detected capacity change from 0 to 1648
Jan 23 17:31:43.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.463945 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 17:31:43.471282 systemd-nsresourced[1346]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 23 17:31:43.473574 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 23 17:31:43.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.494803 kernel: loop4: detected capacity change from 0 to 211168
Jan 23 17:31:43.514659 systemd-oomd[1342]: No swap; memory pressure usage will be degraded
Jan 23 17:31:43.515111 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 23 17:31:43.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.521980 systemd-resolved[1343]: Positive Trust Anchors:
Jan 23 17:31:43.522000 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:31:43.522004 systemd-resolved[1343]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 23 17:31:43.522035 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:31:43.530711 systemd-resolved[1343]: Using system hostname 'ci-4547-1-0-4-2c8b61c80e'.
Jan 23 17:31:43.532092 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 17:31:43.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:43.533145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:31:43.541797 kernel: loop5: detected capacity change from 0 to 100192
Jan 23 17:31:43.551780 kernel: loop6: detected capacity change from 0 to 45344
Jan 23 17:31:43.562784 kernel: loop7: detected capacity change from 0 to 1648
Jan 23 17:31:43.567777 kernel: loop1: detected capacity change from 0 to 211168
Jan 23 17:31:43.579549 (sd-merge)[1375]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'.
Jan 23 17:31:43.582376 (sd-merge)[1375]: Merged extensions into '/usr'.
Jan 23 17:31:43.586869 systemd[1]: Reload requested from client PID 1330 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 17:31:43.586886 systemd[1]: Reloading...
Jan 23 17:31:43.637836 zram_generator::config[1408]: No configuration found.
Jan 23 17:31:43.783586 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:31:43.783833 systemd[1]: Reloading finished in 196 ms. Jan 23 17:31:43.807003 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:31:43.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:43.808282 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 17:31:43.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:43.821387 systemd[1]: Starting ensure-sysext.service... Jan 23 17:31:43.823000 audit: BPF prog-id=8 op=UNLOAD Jan 23 17:31:43.823000 audit: BPF prog-id=7 op=UNLOAD Jan 23 17:31:43.823000 audit: BPF prog-id=28 op=LOAD Jan 23 17:31:43.823000 audit: BPF prog-id=29 op=LOAD Jan 23 17:31:43.823036 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:31:43.825149 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 23 17:31:43.828000 audit: BPF prog-id=30 op=LOAD Jan 23 17:31:43.828000 audit: BPF prog-id=22 op=UNLOAD Jan 23 17:31:43.828000 audit: BPF prog-id=31 op=LOAD Jan 23 17:31:43.828000 audit: BPF prog-id=32 op=LOAD Jan 23 17:31:43.828000 audit: BPF prog-id=23 op=UNLOAD Jan 23 17:31:43.828000 audit: BPF prog-id=24 op=UNLOAD Jan 23 17:31:43.829000 audit: BPF prog-id=33 op=LOAD Jan 23 17:31:43.829000 audit: BPF prog-id=18 op=UNLOAD Jan 23 17:31:43.829000 audit: BPF prog-id=34 op=LOAD Jan 23 17:31:43.829000 audit: BPF prog-id=35 op=LOAD Jan 23 17:31:43.829000 audit: BPF prog-id=19 op=UNLOAD Jan 23 17:31:43.829000 audit: BPF prog-id=20 op=UNLOAD Jan 23 17:31:43.830000 audit: BPF prog-id=36 op=LOAD Jan 23 17:31:43.830000 audit: BPF prog-id=15 op=UNLOAD Jan 23 17:31:43.830000 audit: BPF prog-id=37 op=LOAD Jan 23 17:31:43.830000 audit: BPF prog-id=38 op=LOAD Jan 23 17:31:43.830000 audit: BPF prog-id=16 op=UNLOAD Jan 23 17:31:43.830000 audit: BPF prog-id=17 op=UNLOAD Jan 23 17:31:43.830000 audit: BPF prog-id=39 op=LOAD Jan 23 17:31:43.830000 audit: BPF prog-id=25 op=UNLOAD Jan 23 17:31:43.830000 audit: BPF prog-id=40 op=LOAD Jan 23 17:31:43.830000 audit: BPF prog-id=41 op=LOAD Jan 23 17:31:43.830000 audit: BPF prog-id=26 op=UNLOAD Jan 23 17:31:43.831000 audit: BPF prog-id=27 op=UNLOAD Jan 23 17:31:43.831000 audit: BPF prog-id=42 op=LOAD Jan 23 17:31:43.831000 audit: BPF prog-id=21 op=UNLOAD Jan 23 17:31:43.835957 systemd[1]: Reload requested from client PID 1442 ('systemctl') (unit ensure-sysext.service)... Jan 23 17:31:43.835976 systemd[1]: Reloading... Jan 23 17:31:43.838351 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 17:31:43.838385 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 17:31:43.838594 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 23 17:31:43.839533 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Jan 23 17:31:43.839587 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Jan 23 17:31:43.845002 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:31:43.845018 systemd-tmpfiles[1443]: Skipping /boot Jan 23 17:31:43.850729 systemd-udevd[1444]: Using default interface naming scheme 'v257'. Jan 23 17:31:43.851064 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:31:43.851082 systemd-tmpfiles[1443]: Skipping /boot Jan 23 17:31:43.884789 zram_generator::config[1472]: No configuration found. Jan 23 17:31:43.997784 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 17:31:44.029337 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0 Jan 23 17:31:44.029417 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 17:31:44.029433 kernel: [drm] features: -context_init Jan 23 17:31:44.030825 kernel: [drm] number of scanouts: 1 Jan 23 17:31:44.030889 kernel: [drm] number of cap sets: 0 Jan 23 17:31:44.032807 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0 Jan 23 17:31:44.035782 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 17:31:44.050782 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 17:31:44.082580 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 17:31:44.082675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 17:31:44.084068 systemd[1]: Reloading finished in 247 ms. Jan 23 17:31:44.094530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:31:44.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 23 17:31:44.096993 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:31:44.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.110000 audit: BPF prog-id=43 op=LOAD Jan 23 17:31:44.110000 audit: BPF prog-id=36 op=UNLOAD Jan 23 17:31:44.110000 audit: BPF prog-id=44 op=LOAD Jan 23 17:31:44.111000 audit: BPF prog-id=45 op=LOAD Jan 23 17:31:44.111000 audit: BPF prog-id=37 op=UNLOAD Jan 23 17:31:44.111000 audit: BPF prog-id=38 op=UNLOAD Jan 23 17:31:44.111000 audit: BPF prog-id=46 op=LOAD Jan 23 17:31:44.111000 audit: BPF prog-id=47 op=LOAD Jan 23 17:31:44.111000 audit: BPF prog-id=28 op=UNLOAD Jan 23 17:31:44.111000 audit: BPF prog-id=29 op=UNLOAD Jan 23 17:31:44.111000 audit: BPF prog-id=48 op=LOAD Jan 23 17:31:44.112000 audit: BPF prog-id=39 op=UNLOAD Jan 23 17:31:44.112000 audit: BPF prog-id=49 op=LOAD Jan 23 17:31:44.112000 audit: BPF prog-id=50 op=LOAD Jan 23 17:31:44.112000 audit: BPF prog-id=40 op=UNLOAD Jan 23 17:31:44.112000 audit: BPF prog-id=41 op=UNLOAD Jan 23 17:31:44.112000 audit: BPF prog-id=51 op=LOAD Jan 23 17:31:44.112000 audit: BPF prog-id=30 op=UNLOAD Jan 23 17:31:44.112000 audit: BPF prog-id=52 op=LOAD Jan 23 17:31:44.112000 audit: BPF prog-id=53 op=LOAD Jan 23 17:31:44.112000 audit: BPF prog-id=31 op=UNLOAD Jan 23 17:31:44.112000 audit: BPF prog-id=32 op=UNLOAD Jan 23 17:31:44.113000 audit: BPF prog-id=54 op=LOAD Jan 23 17:31:44.113000 audit: BPF prog-id=33 op=UNLOAD Jan 23 17:31:44.113000 audit: BPF prog-id=55 op=LOAD Jan 23 17:31:44.113000 audit: BPF prog-id=56 op=LOAD Jan 23 17:31:44.113000 audit: BPF prog-id=34 op=UNLOAD Jan 23 17:31:44.113000 audit: BPF prog-id=35 op=UNLOAD Jan 23 17:31:44.114000 audit: BPF prog-id=57 op=LOAD Jan 23 17:31:44.116000 audit: BPF 
prog-id=42 op=UNLOAD Jan 23 17:31:44.157201 systemd[1]: Finished ensure-sysext.service. Jan 23 17:31:44.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.162622 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:31:44.164422 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 17:31:44.165534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:44.166469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:31:44.177943 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:31:44.179965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:31:44.182045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:31:44.184605 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 23 17:31:44.185870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:31:44.185975 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:31:44.186945 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 17:31:44.189950 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 17:31:44.191359 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 23 17:31:44.193951 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 17:31:44.196000 audit: BPF prog-id=58 op=LOAD Jan 23 17:31:44.198675 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:31:44.199826 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 17:31:44.202059 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 17:31:44.204293 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:31:44.206290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:31:44.211107 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 17:31:44.211260 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 17:31:44.213175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:31:44.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.214785 kernel: PTP clock support registered Jan 23 17:31:44.215185 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:31:44.215596 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:31:44.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:44.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.217387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:31:44.217731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:31:44.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.220546 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:31:44.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.220903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:31:44.222715 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 23 17:31:44.223241 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 23 17:31:44.225000 audit[1581]: SYSTEM_BOOT pid=1581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:44.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.226741 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:31:44.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.243996 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:31:44.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:44.248490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:31:44.248647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:31:44.251841 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 17:31:44.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:44.259000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 17:31:44.259000 audit[1609]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd143a550 a2=420 a3=0 items=0 ppid=1564 pid=1609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:44.259000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:31:44.260658 augenrules[1609]: No rules Jan 23 17:31:44.262981 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:31:44.263256 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:31:44.283087 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 17:31:44.284276 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:31:44.290373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:31:44.292366 systemd-networkd[1580]: lo: Link UP Jan 23 17:31:44.292376 systemd-networkd[1580]: lo: Gained carrier Jan 23 17:31:44.293799 systemd-networkd[1580]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:44.293859 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:31:44.293934 systemd-networkd[1580]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:31:44.295202 systemd-networkd[1580]: eth0: Link UP Jan 23 17:31:44.295281 systemd[1]: Reached target network.target - Network. 
Jan 23 17:31:44.295459 systemd-networkd[1580]: eth0: Gained carrier Jan 23 17:31:44.295473 systemd-networkd[1580]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:44.297403 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 17:31:44.299616 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 17:31:44.312025 systemd-networkd[1580]: eth0: DHCPv4 address 10.0.6.147/25, gateway 10.0.6.129 acquired from 10.0.6.129 Jan 23 17:31:44.323252 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:31:44.622078 ldconfig[1572]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 17:31:44.626380 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:31:44.628654 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 17:31:44.657471 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:31:44.658683 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:31:44.659737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 17:31:44.660696 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 17:31:44.662011 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:31:44.662914 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:31:44.663911 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 17:31:44.664980 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 23 17:31:44.665839 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 17:31:44.666788 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:31:44.666820 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:31:44.667502 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:31:44.669347 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:31:44.671527 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:31:44.674258 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:31:44.675442 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:31:44.676503 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:31:44.695090 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:31:44.696277 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:31:44.697872 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:31:44.698804 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:31:44.699545 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:31:44.700377 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:31:44.700406 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:31:44.702845 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 17:31:44.704539 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:31:44.706612 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jan 23 17:31:44.709961 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:31:44.711610 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:31:44.714780 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:44.715122 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:31:44.717084 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:31:44.717934 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:31:44.719068 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:31:44.720841 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:31:44.726993 jq[1632]: false Jan 23 17:31:44.726068 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:31:44.730003 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:31:44.737958 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:31:44.738814 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:31:44.739343 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:31:44.740020 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:31:44.741742 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:31:44.745799 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 23 17:31:44.748292 extend-filesystems[1635]: Found /dev/vda6 Jan 23 17:31:44.747244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:31:44.747459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:31:44.750167 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:31:44.750393 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:31:44.758436 extend-filesystems[1635]: Found /dev/vda9 Jan 23 17:31:44.760348 jq[1648]: true Jan 23 17:31:44.762219 chronyd[1627]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 17:31:44.763878 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:31:44.764227 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:31:44.765874 extend-filesystems[1635]: Checking size of /dev/vda9 Jan 23 17:31:44.767316 chronyd[1627]: Loaded seccomp filter (level 2) Jan 23 17:31:44.767688 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 17:31:44.781127 extend-filesystems[1635]: Resized partition /dev/vda9 Jan 23 17:31:44.785038 tar[1654]: linux-arm64/LICENSE Jan 23 17:31:44.785268 tar[1654]: linux-arm64/helm Jan 23 17:31:44.788823 extend-filesystems[1680]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:31:44.789831 update_engine[1646]: I20260123 17:31:44.788056 1646 main.cc:92] Flatcar Update Engine starting Jan 23 17:31:44.789996 jq[1665]: true Jan 23 17:31:44.798780 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 23 17:31:44.800891 dbus-daemon[1630]: [system] SELinux support is enabled Jan 23 17:31:44.801208 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 17:31:44.804846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:31:44.804882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:31:44.806369 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:31:44.806431 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:31:44.807789 update_engine[1646]: I20260123 17:31:44.807718 1646 update_check_scheduler.cc:74] Next update check in 5m19s Jan 23 17:31:44.808249 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:31:44.813044 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:31:44.870739 systemd-logind[1643]: New seat seat0. Jan 23 17:31:44.897724 locksmithd[1685]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:31:44.909604 systemd-logind[1643]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:31:44.909635 systemd-logind[1643]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 23 17:31:44.910002 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:31:44.942321 bash[1699]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:31:44.943715 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 23 17:31:44.946400 containerd[1668]: time="2026-01-23T17:31:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:31:44.947987 systemd[1]: Starting sshkeys.service... Jan 23 17:31:44.949868 containerd[1668]: time="2026-01-23T17:31:44.949825040Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 17:31:44.966722 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:31:44.968046 containerd[1668]: time="2026-01-23T17:31:44.967999400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.08µs" Jan 23 17:31:44.968046 containerd[1668]: time="2026-01-23T17:31:44.968042640Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:31:44.968244 containerd[1668]: time="2026-01-23T17:31:44.968217720Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:31:44.968269 containerd[1668]: time="2026-01-23T17:31:44.968244520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:31:44.968417 containerd[1668]: time="2026-01-23T17:31:44.968393360Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:31:44.968608 containerd[1668]: time="2026-01-23T17:31:44.968417920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:31:44.969202 containerd[1668]: time="2026-01-23T17:31:44.969169760Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 
17:31:44.969202 containerd[1668]: time="2026-01-23T17:31:44.969199120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:31:44.969594 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 17:31:44.970524 containerd[1668]: time="2026-01-23T17:31:44.970489360Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:31:44.970524 containerd[1668]: time="2026-01-23T17:31:44.970521160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:31:44.970589 containerd[1668]: time="2026-01-23T17:31:44.970533720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:31:44.970589 containerd[1668]: time="2026-01-23T17:31:44.970542840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 17:31:44.970766 containerd[1668]: time="2026-01-23T17:31:44.970738200Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 17:31:44.970797 containerd[1668]: time="2026-01-23T17:31:44.970774200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:31:44.970878 containerd[1668]: time="2026-01-23T17:31:44.970841960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:31:44.971041 containerd[1668]: time="2026-01-23T17:31:44.971015440Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:31:44.971069 containerd[1668]: time="2026-01-23T17:31:44.971057400Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:31:44.971089 containerd[1668]: time="2026-01-23T17:31:44.971067720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:31:44.971106 containerd[1668]: time="2026-01-23T17:31:44.971091680Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:31:44.973761 containerd[1668]: time="2026-01-23T17:31:44.971323560Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:31:44.973761 containerd[1668]: time="2026-01-23T17:31:44.971383920Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:31:44.988790 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:44.998262 containerd[1668]: time="2026-01-23T17:31:44.998145400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:31:44.998356 containerd[1668]: time="2026-01-23T17:31:44.998292240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 17:31:44.998502 containerd[1668]: time="2026-01-23T17:31:44.998473480Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 17:31:44.998528 containerd[1668]: time="2026-01-23T17:31:44.998500920Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:31:44.998528 containerd[1668]: 
time="2026-01-23T17:31:44.998518560Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:31:44.998583 containerd[1668]: time="2026-01-23T17:31:44.998531880Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:31:44.998583 containerd[1668]: time="2026-01-23T17:31:44.998544040Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:31:44.998651 containerd[1668]: time="2026-01-23T17:31:44.998626520Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:31:44.998678 containerd[1668]: time="2026-01-23T17:31:44.998663600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:31:44.998696 containerd[1668]: time="2026-01-23T17:31:44.998679360Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:31:44.998696 containerd[1668]: time="2026-01-23T17:31:44.998690320Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:31:44.998735 containerd[1668]: time="2026-01-23T17:31:44.998701800Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:31:44.998735 containerd[1668]: time="2026-01-23T17:31:44.998714240Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:31:44.998735 containerd[1668]: time="2026-01-23T17:31:44.998729480Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:31:44.999197 containerd[1668]: time="2026-01-23T17:31:44.999168480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:31:44.999221 
containerd[1668]: time="2026-01-23T17:31:44.999206360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:31:44.999239 containerd[1668]: time="2026-01-23T17:31:44.999222840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:31:44.999302 containerd[1668]: time="2026-01-23T17:31:44.999282520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:31:44.999335 containerd[1668]: time="2026-01-23T17:31:44.999306760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:31:44.999335 containerd[1668]: time="2026-01-23T17:31:44.999319680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:31:44.999335 containerd[1668]: time="2026-01-23T17:31:44.999331400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:31:44.999387 containerd[1668]: time="2026-01-23T17:31:44.999343560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:31:44.999446 containerd[1668]: time="2026-01-23T17:31:44.999354760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:31:44.999476 containerd[1668]: time="2026-01-23T17:31:44.999449200Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:31:44.999476 containerd[1668]: time="2026-01-23T17:31:44.999464280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:31:44.999514 containerd[1668]: time="2026-01-23T17:31:44.999493560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:31:45.001757 containerd[1668]: time="2026-01-23T17:31:44.999531400Z" level=info msg="Get image filesystem 
path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:31:45.001757 containerd[1668]: time="2026-01-23T17:31:44.999550480Z" level=info msg="Start snapshots syncer" Jan 23 17:31:45.001757 containerd[1668]: time="2026-01-23T17:31:44.999629360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:44.999992720Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecatio
nWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000048360Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000103160Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000331000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000355720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000367120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000376840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000388160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000399480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000467240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000479200Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000490800Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000539240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000600760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000615040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000625560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000633960Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000644200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000654600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000833160Z" level=info msg="runtime interface created" Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000843960Z" level=info msg="created NRI interface" Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000853440Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000866080Z" level=info msg="Connect containerd service" Jan 23 17:31:45.001818 containerd[1668]: time="2026-01-23T17:31:45.000888480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:31:45.003447 containerd[1668]: time="2026-01-23T17:31:45.003412440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:31:45.077786 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087561000Z" level=info msg="Start subscribing containerd event" Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087636640Z" level=info msg="Start recovering state" Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087723640Z" level=info msg="Start event monitor" Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087735480Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087748960Z" level=info msg="Start streaming server" Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087772200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087779480Z" level=info msg="runtime interface starting up..." Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087785440Z" level=info msg="starting plugins..." 
Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087806600Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087932120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.087996760Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:31:45.096880 containerd[1668]: time="2026-01-23T17:31:45.088081240Z" level=info msg="containerd successfully booted in 0.142041s" Jan 23 17:31:45.088229 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 17:31:45.099315 extend-filesystems[1680]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 17:31:45.099315 extend-filesystems[1680]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 23 17:31:45.099315 extend-filesystems[1680]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 23 17:31:45.102746 extend-filesystems[1635]: Resized filesystem in /dev/vda9 Jan 23 17:31:45.101026 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:31:45.102811 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:31:45.163633 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:31:45.228363 tar[1654]: linux-arm64/README.md Jan 23 17:31:45.246066 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:31:45.591111 sshd_keygen[1671]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:31:45.610541 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:31:45.613555 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:31:45.615480 systemd[1]: Started sshd@0-10.0.6.147:22-4.153.228.146:42260.service - OpenSSH per-connection server daemon (4.153.228.146:42260). 
Jan 23 17:31:45.641077 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:31:45.641371 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:31:45.644332 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:31:45.666909 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:31:45.672042 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:31:45.674160 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 17:31:45.675532 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:31:45.731837 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:45.999830 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:46.166802 sshd[1747]: Accepted publickey for core from 4.153.228.146 port 42260 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:31:46.168524 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:46.180321 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:31:46.182347 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:31:46.186257 systemd-logind[1643]: New session 1 of user core. Jan 23 17:31:46.213094 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:31:46.216289 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:31:46.244137 (systemd)[1762]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:46.246749 systemd-logind[1643]: New session 2 of user core. Jan 23 17:31:46.277914 systemd-networkd[1580]: eth0: Gained IPv6LL Jan 23 17:31:46.281818 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 23 17:31:46.284265 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:31:46.287263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:46.289483 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 17:31:46.317395 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:31:46.374894 systemd[1762]: Queued start job for default target default.target. Jan 23 17:31:46.388005 systemd[1762]: Created slice app.slice - User Application Slice. Jan 23 17:31:46.388154 systemd[1762]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 17:31:46.388225 systemd[1762]: Reached target paths.target - Paths. Jan 23 17:31:46.388287 systemd[1762]: Reached target timers.target - Timers. Jan 23 17:31:46.389592 systemd[1762]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 17:31:46.390397 systemd[1762]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 17:31:46.399242 systemd[1762]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:31:46.399298 systemd[1762]: Reached target sockets.target - Sockets. Jan 23 17:31:46.402796 systemd[1762]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 17:31:46.402904 systemd[1762]: Reached target basic.target - Basic System. Jan 23 17:31:46.402962 systemd[1762]: Reached target default.target - Main User Target. Jan 23 17:31:46.402988 systemd[1762]: Startup finished in 151ms. Jan 23 17:31:46.403260 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:31:46.416335 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:31:46.726619 systemd[1]: Started sshd@1-10.0.6.147:22-4.153.228.146:35758.service - OpenSSH per-connection server daemon (4.153.228.146:35758). Jan 23 17:31:47.074135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:31:47.078272 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:47.262561 sshd[1788]: Accepted publickey for core from 4.153.228.146 port 35758 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:31:47.263979 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:47.268320 systemd-logind[1643]: New session 3 of user core. Jan 23 17:31:47.280283 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:31:47.552312 sshd[1803]: Connection closed by 4.153.228.146 port 35758 Jan 23 17:31:47.552607 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:47.556917 systemd[1]: sshd@1-10.0.6.147:22-4.153.228.146:35758.service: Deactivated successfully. Jan 23 17:31:47.560316 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:31:47.561173 systemd-logind[1643]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:31:47.562362 systemd-logind[1643]: Removed session 3. Jan 23 17:31:47.577624 kubelet[1796]: E0123 17:31:47.577560 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:47.579496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:47.579628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:47.579983 systemd[1]: kubelet.service: Consumed 763ms CPU time, 257.8M memory peak. Jan 23 17:31:47.669456 systemd[1]: Started sshd@2-10.0.6.147:22-4.153.228.146:35770.service - OpenSSH per-connection server daemon (4.153.228.146:35770). 
Jan 23 17:31:47.739786 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:48.011828 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:48.191224 sshd[1812]: Accepted publickey for core from 4.153.228.146 port 35770 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:31:48.192466 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:48.196181 systemd-logind[1643]: New session 4 of user core. Jan 23 17:31:48.203260 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:31:48.481248 sshd[1818]: Connection closed by 4.153.228.146 port 35770 Jan 23 17:31:48.481601 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:48.485575 systemd[1]: sshd@2-10.0.6.147:22-4.153.228.146:35770.service: Deactivated successfully. Jan 23 17:31:48.487198 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:31:48.487868 systemd-logind[1643]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:31:48.489183 systemd-logind[1643]: Removed session 4. 
Jan 23 17:31:51.753814 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:51.762107 coreos-metadata[1629]: Jan 23 17:31:51.761 WARN failed to locate config-drive, using the metadata service API instead Jan 23 17:31:52.025794 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:31:52.034592 coreos-metadata[1715]: Jan 23 17:31:52.031 WARN failed to locate config-drive, using the metadata service API instead Jan 23 17:31:52.410389 coreos-metadata[1715]: Jan 23 17:31:52.410 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 23 17:31:52.410764 coreos-metadata[1629]: Jan 23 17:31:52.410 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 17:31:53.913655 coreos-metadata[1715]: Jan 23 17:31:53.913 INFO Fetch successful Jan 23 17:31:53.913655 coreos-metadata[1715]: Jan 23 17:31:53.913 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 17:31:56.246884 coreos-metadata[1629]: Jan 23 17:31:56.246 INFO Fetch successful Jan 23 17:31:56.247214 coreos-metadata[1629]: Jan 23 17:31:56.247 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 17:31:57.499823 coreos-metadata[1715]: Jan 23 17:31:57.499 INFO Fetch successful Jan 23 17:31:57.501577 unknown[1715]: wrote ssh authorized keys file for user: core Jan 23 17:31:57.532117 update-ssh-keys[1832]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:31:57.533374 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:31:57.534480 systemd[1]: Finished sshkeys.service. Jan 23 17:31:57.830493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:31:57.832785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:57.962619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:31:57.966338 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:58.250788 kubelet[1843]: E0123 17:31:58.250658 1843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:58.254116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:58.254242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:58.254591 systemd[1]: kubelet.service: Consumed 139ms CPU time, 106.9M memory peak. Jan 23 17:31:58.592608 systemd[1]: Started sshd@3-10.0.6.147:22-4.153.228.146:44614.service - OpenSSH per-connection server daemon (4.153.228.146:44614). Jan 23 17:31:58.786830 coreos-metadata[1629]: Jan 23 17:31:58.786 INFO Fetch successful Jan 23 17:31:58.787147 coreos-metadata[1629]: Jan 23 17:31:58.786 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 17:31:59.119796 sshd[1852]: Accepted publickey for core from 4.153.228.146 port 44614 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:31:59.120922 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:59.125584 systemd-logind[1643]: New session 5 of user core. Jan 23 17:31:59.136228 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 17:31:59.409600 sshd[1856]: Connection closed by 4.153.228.146 port 44614 Jan 23 17:31:59.409958 sshd-session[1852]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:59.414177 systemd[1]: sshd@3-10.0.6.147:22-4.153.228.146:44614.service: Deactivated successfully. 
Jan 23 17:31:59.417191 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:31:59.417834 systemd-logind[1643]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:31:59.419052 systemd-logind[1643]: Removed session 5. Jan 23 17:31:59.445464 coreos-metadata[1629]: Jan 23 17:31:59.445 INFO Fetch successful Jan 23 17:31:59.445464 coreos-metadata[1629]: Jan 23 17:31:59.445 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 17:31:59.522365 systemd[1]: Started sshd@4-10.0.6.147:22-4.153.228.146:44628.service - OpenSSH per-connection server daemon (4.153.228.146:44628). Jan 23 17:32:00.052806 sshd[1862]: Accepted publickey for core from 4.153.228.146 port 44628 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:32:00.053967 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:32:00.058642 systemd-logind[1643]: New session 6 of user core. Jan 23 17:32:00.073211 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 17:32:00.119175 coreos-metadata[1629]: Jan 23 17:32:00.119 INFO Fetch successful Jan 23 17:32:00.119175 coreos-metadata[1629]: Jan 23 17:32:00.119 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 17:32:00.351464 sshd[1866]: Connection closed by 4.153.228.146 port 44628 Jan 23 17:32:00.352008 sshd-session[1862]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:00.355860 systemd[1]: sshd@4-10.0.6.147:22-4.153.228.146:44628.service: Deactivated successfully. Jan 23 17:32:00.357818 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:32:00.361086 systemd-logind[1643]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:32:00.362021 systemd-logind[1643]: Removed session 6. 
Jan 23 17:32:00.775703 coreos-metadata[1629]: Jan 23 17:32:00.775 INFO Fetch successful Jan 23 17:32:00.775703 coreos-metadata[1629]: Jan 23 17:32:00.775 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 17:32:01.425079 coreos-metadata[1629]: Jan 23 17:32:01.424 INFO Fetch successful Jan 23 17:32:01.447746 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:32:01.448223 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:32:01.448347 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:32:01.451812 systemd[1]: Startup finished in 2.590s (kernel) + 13.450s (initrd) + 18.940s (userspace) = 34.982s. Jan 23 17:32:08.504806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:32:08.506239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:08.549852 chronyd[1627]: Selected source PHC0 Jan 23 17:32:08.638176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:08.641609 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:32:09.087904 kubelet[1884]: E0123 17:32:09.087842 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:32:09.090201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:32:09.090319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:32:09.090805 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108.2M memory peak. 
Jan 23 17:32:10.304016 systemd[1]: Started sshd@5-10.0.6.147:22-4.153.228.146:46994.service - OpenSSH per-connection server daemon (4.153.228.146:46994). Jan 23 17:32:10.778019 sshd[1893]: Accepted publickey for core from 4.153.228.146 port 46994 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:32:10.779197 sshd-session[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:32:10.783276 systemd-logind[1643]: New session 7 of user core. Jan 23 17:32:10.794960 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:32:11.044237 sshd[1897]: Connection closed by 4.153.228.146 port 46994 Jan 23 17:32:11.044643 sshd-session[1893]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:11.047806 systemd[1]: sshd@5-10.0.6.147:22-4.153.228.146:46994.service: Deactivated successfully. Jan 23 17:32:11.049293 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 17:32:11.051016 systemd-logind[1643]: Session 7 logged out. Waiting for processes to exit. Jan 23 17:32:11.051686 systemd-logind[1643]: Removed session 7. Jan 23 17:32:11.147894 systemd[1]: Started sshd@6-10.0.6.147:22-4.153.228.146:47000.service - OpenSSH per-connection server daemon (4.153.228.146:47000). Jan 23 17:32:11.660148 sshd[1903]: Accepted publickey for core from 4.153.228.146 port 47000 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:32:11.661298 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:32:11.665666 systemd-logind[1643]: New session 8 of user core. Jan 23 17:32:11.680131 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:32:11.931591 sshd[1907]: Connection closed by 4.153.228.146 port 47000 Jan 23 17:32:11.932308 sshd-session[1903]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:11.935874 systemd[1]: sshd@6-10.0.6.147:22-4.153.228.146:47000.service: Deactivated successfully. 
Jan 23 17:32:11.937276 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:32:11.937884 systemd-logind[1643]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:32:11.938765 systemd-logind[1643]: Removed session 8. Jan 23 17:32:12.028556 systemd[1]: Started sshd@7-10.0.6.147:22-4.153.228.146:47014.service - OpenSSH per-connection server daemon (4.153.228.146:47014). Jan 23 17:32:12.504071 sshd[1913]: Accepted publickey for core from 4.153.228.146 port 47014 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:32:12.505030 sshd-session[1913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:32:12.508562 systemd-logind[1643]: New session 9 of user core. Jan 23 17:32:12.515048 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 17:32:12.771012 sshd[1917]: Connection closed by 4.153.228.146 port 47014 Jan 23 17:32:12.771783 sshd-session[1913]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:12.775328 systemd[1]: sshd@7-10.0.6.147:22-4.153.228.146:47014.service: Deactivated successfully. Jan 23 17:32:12.776908 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:32:12.777527 systemd-logind[1643]: Session 9 logged out. Waiting for processes to exit. Jan 23 17:32:12.778355 systemd-logind[1643]: Removed session 9. Jan 23 17:32:12.876879 systemd[1]: Started sshd@8-10.0.6.147:22-4.153.228.146:47024.service - OpenSSH per-connection server daemon (4.153.228.146:47024). Jan 23 17:32:13.418601 sshd[1923]: Accepted publickey for core from 4.153.228.146 port 47024 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:32:13.419974 sshd-session[1923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:32:13.424637 systemd-logind[1643]: New session 10 of user core. Jan 23 17:32:13.435148 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 17:32:13.623909 sudo[1928]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:32:13.624163 sudo[1928]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:32:13.638047 sudo[1928]: pam_unix(sudo:session): session closed for user root Jan 23 17:32:13.733722 sshd[1927]: Connection closed by 4.153.228.146 port 47024 Jan 23 17:32:13.734262 sshd-session[1923]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:13.738747 systemd[1]: sshd@8-10.0.6.147:22-4.153.228.146:47024.service: Deactivated successfully. Jan 23 17:32:13.740316 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 17:32:13.741580 systemd-logind[1643]: Session 10 logged out. Waiting for processes to exit. Jan 23 17:32:13.742514 systemd-logind[1643]: Removed session 10. Jan 23 17:32:13.846302 systemd[1]: Started sshd@9-10.0.6.147:22-4.153.228.146:47032.service - OpenSSH per-connection server daemon (4.153.228.146:47032). Jan 23 17:32:14.380379 sshd[1935]: Accepted publickey for core from 4.153.228.146 port 47032 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:32:14.381691 sshd-session[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:32:14.385810 systemd-logind[1643]: New session 11 of user core. Jan 23 17:32:14.403170 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 17:32:14.583632 sudo[1941]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:32:14.583912 sudo[1941]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:32:14.586576 sudo[1941]: pam_unix(sudo:session): session closed for user root Jan 23 17:32:14.592298 sudo[1940]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:32:14.592544 sudo[1940]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:32:14.599089 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:32:14.635000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 17:32:14.637030 augenrules[1965]: No rules Jan 23 17:32:14.638209 kernel: kauditd_printk_skb: 186 callbacks suppressed Jan 23 17:32:14.638271 kernel: audit: type=1305 audit(1769189534.635:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 17:32:14.638796 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:32:14.639020 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 23 17:32:14.635000 audit[1965]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff67cffc0 a2=420 a3=0 items=0 ppid=1946 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:14.639987 sudo[1940]: pam_unix(sudo:session): session closed for user root Jan 23 17:32:14.643302 kernel: audit: type=1300 audit(1769189534.635:230): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff67cffc0 a2=420 a3=0 items=0 ppid=1946 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:14.635000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:32:14.644857 kernel: audit: type=1327 audit(1769189534.635:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:32:14.644917 kernel: audit: type=1130 audit(1769189534.638:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.647387 kernel: audit: type=1131 audit(1769189534.638:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:32:14.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.649771 kernel: audit: type=1106 audit(1769189534.639:233): pid=1940 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.639000 audit[1940]: USER_END pid=1940 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.639000 audit[1940]: CRED_DISP pid=1940 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.654777 kernel: audit: type=1104 audit(1769189534.639:234): pid=1940 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.742849 sshd[1939]: Connection closed by 4.153.228.146 port 47032 Jan 23 17:32:14.743227 sshd-session[1935]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:14.744000 audit[1935]: USER_END pid=1935 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:14.748074 systemd[1]: sshd@9-10.0.6.147:22-4.153.228.146:47032.service: Deactivated successfully. 
Jan 23 17:32:14.749548 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 17:32:14.744000 audit[1935]: CRED_DISP pid=1935 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:14.752522 kernel: audit: type=1106 audit(1769189534.744:235): pid=1935 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:14.752606 kernel: audit: type=1104 audit(1769189534.744:236): pid=1935 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:14.752548 systemd-logind[1643]: Session 11 logged out. Waiting for processes to exit. Jan 23 17:32:14.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.6.147:22-4.153.228.146:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.755590 kernel: audit: type=1131 audit(1769189534.747:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.6.147:22-4.153.228.146:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:14.756110 systemd-logind[1643]: Removed session 11. Jan 23 17:32:14.852431 systemd[1]: Started sshd@10-10.0.6.147:22-4.153.228.146:58480.service - OpenSSH per-connection server daemon (4.153.228.146:58480). 
Jan 23 17:32:14.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.6.147:22-4.153.228.146:58480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:15.391000 audit[1974]: USER_ACCT pid=1974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:15.392711 sshd[1974]: Accepted publickey for core from 4.153.228.146 port 58480 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:32:15.393000 audit[1974]: CRED_ACQ pid=1974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:15.393000 audit[1974]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb8adf50 a2=3 a3=0 items=0 ppid=1 pid=1974 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:15.393000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:32:15.394221 sshd-session[1974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:32:15.398648 systemd-logind[1643]: New session 12 of user core. Jan 23 17:32:15.413203 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 17:32:15.415000 audit[1974]: USER_START pid=1974 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:15.416000 audit[1978]: CRED_ACQ pid=1978 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:15.589000 audit[1979]: USER_ACCT pid=1979 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:15.589938 sudo[1979]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:32:15.589000 audit[1979]: CRED_REFR pid=1979 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:15.589000 audit[1979]: USER_START pid=1979 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:15.590205 sudo[1979]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:32:15.899465 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 17:32:15.920369 (dockerd)[2001]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:32:16.146450 dockerd[2001]: time="2026-01-23T17:32:16.146390914Z" level=info msg="Starting up" Jan 23 17:32:16.148239 dockerd[2001]: time="2026-01-23T17:32:16.148137922Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:32:16.158650 dockerd[2001]: time="2026-01-23T17:32:16.158540293Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:32:16.199675 dockerd[2001]: time="2026-01-23T17:32:16.199636615Z" level=info msg="Loading containers: start." Jan 23 17:32:16.210786 kernel: Initializing XFRM netlink socket Jan 23 17:32:16.255000 audit[2053]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.255000 audit[2053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd99d1f70 a2=0 a3=0 items=0 ppid=2001 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.255000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 17:32:16.257000 audit[2055]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.257000 audit[2055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc0961780 a2=0 a3=0 items=0 ppid=2001 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:32:16.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 17:32:16.259000 audit[2057]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.259000 audit[2057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff63f390 a2=0 a3=0 items=0 ppid=2001 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 17:32:16.260000 audit[2059]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.260000 audit[2059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe30de900 a2=0 a3=0 items=0 ppid=2001 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 17:32:16.262000 audit[2061]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.262000 audit[2061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee7e27a0 a2=0 a3=0 items=0 ppid=2001 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.262000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 17:32:16.264000 audit[2063]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.264000 audit[2063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc4700f60 a2=0 a3=0 items=0 ppid=2001 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.264000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:32:16.266000 audit[2065]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.266000 audit[2065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd8ab4ed0 a2=0 a3=0 items=0 ppid=2001 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:32:16.267000 audit[2067]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.267000 audit[2067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffedd2d290 a2=0 a3=0 items=0 ppid=2001 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:32:16.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 17:32:16.296000 audit[2070]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.296000 audit[2070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd4d0e410 a2=0 a3=0 items=0 ppid=2001 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 17:32:16.298000 audit[2072]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.298000 audit[2072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe63e4dd0 a2=0 a3=0 items=0 ppid=2001 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 17:32:16.300000 audit[2074]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.300000 audit[2074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffcd17fa90 a2=0 a3=0 items=0 ppid=2001 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.300000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 17:32:16.302000 audit[2076]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.302000 audit[2076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffeb214fb0 a2=0 a3=0 items=0 ppid=2001 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.302000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:32:16.303000 audit[2078]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.303000 audit[2078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe9811c60 a2=0 a3=0 items=0 ppid=2001 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 17:32:16.337000 audit[2108]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.337000 audit[2108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdabcab90 a2=0 a3=0 items=0 ppid=2001 pid=2108 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.337000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 17:32:16.339000 audit[2110]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.339000 audit[2110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc8b95470 a2=0 a3=0 items=0 ppid=2001 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.339000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 17:32:16.341000 audit[2112]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.341000 audit[2112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffea4b490 a2=0 a3=0 items=0 ppid=2001 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 17:32:16.342000 audit[2114]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.342000 audit[2114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff84e3750 a2=0 a3=0 items=0 ppid=2001 pid=2114 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.342000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 17:32:16.344000 audit[2116]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.344000 audit[2116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe0561010 a2=0 a3=0 items=0 ppid=2001 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 17:32:16.346000 audit[2118]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.346000 audit[2118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc7b5a120 a2=0 a3=0 items=0 ppid=2001 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:32:16.348000 audit[2120]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.348000 audit[2120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe1ae48a0 a2=0 a3=0 items=0 ppid=2001 pid=2120 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:32:16.350000 audit[2122]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.350000 audit[2122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffde880f10 a2=0 a3=0 items=0 ppid=2001 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.350000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 17:32:16.352000 audit[2124]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.352000 audit[2124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd1250190 a2=0 a3=0 items=0 ppid=2001 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 17:32:16.355000 audit[2126]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2126 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.355000 audit[2126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffc179950 a2=0 a3=0 items=0 ppid=2001 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 17:32:16.357000 audit[2128]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.357000 audit[2128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd9769030 a2=0 a3=0 items=0 ppid=2001 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 17:32:16.359000 audit[2130]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.359000 audit[2130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffffd089510 a2=0 a3=0 items=0 ppid=2001 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.359000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:32:16.360000 audit[2132]: NETFILTER_CFG 
table=filter:27 family=10 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.360000 audit[2132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe5d12b60 a2=0 a3=0 items=0 ppid=2001 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 17:32:16.365000 audit[2137]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.365000 audit[2137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe0d6be10 a2=0 a3=0 items=0 ppid=2001 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 17:32:16.367000 audit[2139]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.367000 audit[2139]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd51e2440 a2=0 a3=0 items=0 ppid=2001 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.367000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 17:32:16.369000 audit[2141]: NETFILTER_CFG 
table=filter:30 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.369000 audit[2141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc8ec1ff0 a2=0 a3=0 items=0 ppid=2001 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.369000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 17:32:16.371000 audit[2143]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.371000 audit[2143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff7637030 a2=0 a3=0 items=0 ppid=2001 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.371000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 17:32:16.373000 audit[2145]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.373000 audit[2145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd6bd0170 a2=0 a3=0 items=0 ppid=2001 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.373000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 17:32:16.375000 audit[2147]: NETFILTER_CFG table=filter:33 
family=10 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:16.375000 audit[2147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe01a6840 a2=0 a3=0 items=0 ppid=2001 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.375000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 17:32:16.389000 audit[2152]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.389000 audit[2152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffee3c9730 a2=0 a3=0 items=0 ppid=2001 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.389000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 17:32:16.391000 audit[2155]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.391000 audit[2155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff8d9c980 a2=0 a3=0 items=0 ppid=2001 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.391000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 17:32:16.398000 audit[2163]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.398000 audit[2163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd00b3600 a2=0 a3=0 items=0 ppid=2001 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 17:32:16.407000 audit[2169]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.407000 audit[2169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffa1c3a40 a2=0 a3=0 items=0 ppid=2001 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.407000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 17:32:16.409000 audit[2171]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.409000 audit[2171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc9cbbd80 a2=0 a3=0 items=0 ppid=2001 pid=2171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.409000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 17:32:16.411000 audit[2173]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.411000 audit[2173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe95127a0 a2=0 a3=0 items=0 ppid=2001 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.411000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 17:32:16.412000 audit[2175]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:16.412000 audit[2175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc340f8a0 a2=0 a3=0 items=0 ppid=2001 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:32:16.414000 audit[2177]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2177 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 23 17:32:16.414000 audit[2177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff1d74c40 a2=0 a3=0 items=0 ppid=2001 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:16.414000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 17:32:16.415892 systemd-networkd[1580]: docker0: Link UP Jan 23 17:32:16.420306 dockerd[2001]: time="2026-01-23T17:32:16.420230937Z" level=info msg="Loading containers: done." Jan 23 17:32:16.432839 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1733668017-merged.mount: Deactivated successfully. Jan 23 17:32:16.443250 dockerd[2001]: time="2026-01-23T17:32:16.443176850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:32:16.443481 dockerd[2001]: time="2026-01-23T17:32:16.443267010Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:32:16.443481 dockerd[2001]: time="2026-01-23T17:32:16.443438971Z" level=info msg="Initializing buildkit" Jan 23 17:32:16.465477 dockerd[2001]: time="2026-01-23T17:32:16.465416239Z" level=info msg="Completed buildkit initialization" Jan 23 17:32:16.471898 dockerd[2001]: time="2026-01-23T17:32:16.471851510Z" level=info msg="Daemon has completed initialization" Jan 23 17:32:16.472105 dockerd[2001]: time="2026-01-23T17:32:16.471917951Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:32:16.472488 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 23 17:32:16.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:17.468343 containerd[1668]: time="2026-01-23T17:32:17.468297158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 17:32:18.163968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856599453.mount: Deactivated successfully. Jan 23 17:32:18.898018 containerd[1668]: time="2026-01-23T17:32:18.897962011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:18.899780 containerd[1668]: time="2026-01-23T17:32:18.899708820Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Jan 23 17:32:18.900917 containerd[1668]: time="2026-01-23T17:32:18.900877186Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:18.904495 containerd[1668]: time="2026-01-23T17:32:18.904454403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:18.905421 containerd[1668]: time="2026-01-23T17:32:18.905385568Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.43704913s" Jan 23 17:32:18.905459 containerd[1668]: time="2026-01-23T17:32:18.905420648Z" level=info 
msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 17:32:18.907080 containerd[1668]: time="2026-01-23T17:32:18.907044896Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 17:32:19.341034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:32:19.342450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:19.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:19.473411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:19.488055 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:32:19.520505 kubelet[2284]: E0123 17:32:19.520384 2284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:32:19.523207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:32:19.523414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:32:19.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:32:19.525841 systemd[1]: kubelet.service: Consumed 135ms CPU time, 106.8M memory peak. 
Jan 23 17:32:20.303556 containerd[1668]: time="2026-01-23T17:32:20.303508026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:20.307774 containerd[1668]: time="2026-01-23T17:32:20.306976963Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 23 17:32:20.309031 containerd[1668]: time="2026-01-23T17:32:20.309001453Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:20.312007 containerd[1668]: time="2026-01-23T17:32:20.311972188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:20.313285 containerd[1668]: time="2026-01-23T17:32:20.313238554Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.406162898s" Jan 23 17:32:20.313285 containerd[1668]: time="2026-01-23T17:32:20.313281354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 17:32:20.313985 containerd[1668]: time="2026-01-23T17:32:20.313943278Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 17:32:21.216623 containerd[1668]: time="2026-01-23T17:32:21.216564946Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:21.218023 containerd[1668]: time="2026-01-23T17:32:21.217972912Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=0" Jan 23 17:32:21.219064 containerd[1668]: time="2026-01-23T17:32:21.219030438Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:21.222312 containerd[1668]: time="2026-01-23T17:32:21.221868852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:21.222957 containerd[1668]: time="2026-01-23T17:32:21.222931777Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 908.955019ms" Jan 23 17:32:21.222999 containerd[1668]: time="2026-01-23T17:32:21.222960537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 23 17:32:21.223595 containerd[1668]: time="2026-01-23T17:32:21.223572540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 17:32:22.091860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067572118.mount: Deactivated successfully. 
Jan 23 17:32:22.321857 containerd[1668]: time="2026-01-23T17:32:22.321807047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:22.322774 containerd[1668]: time="2026-01-23T17:32:22.322641452Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Jan 23 17:32:22.324172 containerd[1668]: time="2026-01-23T17:32:22.324120739Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:22.326360 containerd[1668]: time="2026-01-23T17:32:22.326333830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:22.327095 containerd[1668]: time="2026-01-23T17:32:22.326785072Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.103184012s" Jan 23 17:32:22.327095 containerd[1668]: time="2026-01-23T17:32:22.326815552Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 17:32:22.327375 containerd[1668]: time="2026-01-23T17:32:22.327353795Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 17:32:23.099075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343389216.mount: Deactivated successfully. 
Jan 23 17:32:23.514304 containerd[1668]: time="2026-01-23T17:32:23.514157577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:23.515330 containerd[1668]: time="2026-01-23T17:32:23.515267902Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Jan 23 17:32:23.516200 containerd[1668]: time="2026-01-23T17:32:23.516157947Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:23.520069 containerd[1668]: time="2026-01-23T17:32:23.520020725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:23.521077 containerd[1668]: time="2026-01-23T17:32:23.520857450Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.193469855s" Jan 23 17:32:23.521077 containerd[1668]: time="2026-01-23T17:32:23.520891090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 23 17:32:23.521399 containerd[1668]: time="2026-01-23T17:32:23.521372452Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 17:32:24.080102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681893288.mount: Deactivated successfully. 
Jan 23 17:32:24.085693 containerd[1668]: time="2026-01-23T17:32:24.085624020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:32:24.086556 containerd[1668]: time="2026-01-23T17:32:24.086485744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:32:24.087760 containerd[1668]: time="2026-01-23T17:32:24.087705350Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:32:24.089929 containerd[1668]: time="2026-01-23T17:32:24.089887761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:32:24.091226 containerd[1668]: time="2026-01-23T17:32:24.091155687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 569.740235ms" Jan 23 17:32:24.091226 containerd[1668]: time="2026-01-23T17:32:24.091192967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 17:32:24.091883 containerd[1668]: time="2026-01-23T17:32:24.091849091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 17:32:24.594749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122649083.mount: Deactivated 
successfully. Jan 23 17:32:26.027985 containerd[1668]: time="2026-01-23T17:32:26.027790940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:26.030192 containerd[1668]: time="2026-01-23T17:32:26.030139112Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Jan 23 17:32:26.031650 containerd[1668]: time="2026-01-23T17:32:26.031614199Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:26.034686 containerd[1668]: time="2026-01-23T17:32:26.034654534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:26.036308 containerd[1668]: time="2026-01-23T17:32:26.036280822Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.944400131s" Jan 23 17:32:26.036401 containerd[1668]: time="2026-01-23T17:32:26.036386422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 23 17:32:29.729838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 17:32:29.731979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:30.010162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:32:30.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:30.013867 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 23 17:32:30.013934 kernel: audit: type=1130 audit(1769189550.009:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:30.025280 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:32:30.140049 kubelet[2450]: E0123 17:32:30.139997 2450 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:32:30.142523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:32:30.142779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:32:30.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:32:30.143449 systemd[1]: kubelet.service: Consumed 136ms CPU time, 105.8M memory peak. Jan 23 17:32:30.146789 kernel: audit: type=1131 audit(1769189550.142:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 23 17:32:30.484556 update_engine[1646]: I20260123 17:32:30.484058 1646 update_attempter.cc:509] Updating boot flags... Jan 23 17:32:31.956227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:31.956382 systemd[1]: kubelet.service: Consumed 136ms CPU time, 105.8M memory peak. Jan 23 17:32:31.961059 kernel: audit: type=1130 audit(1769189551.955:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:31.961123 kernel: audit: type=1131 audit(1769189551.955:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:31.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:31.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:31.958319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:31.983196 systemd[1]: Reload requested from client PID 2482 ('systemctl') (unit session-12.scope)... Jan 23 17:32:31.983216 systemd[1]: Reloading... Jan 23 17:32:32.076858 zram_generator::config[2523]: No configuration found. Jan 23 17:32:32.244318 systemd[1]: Reloading finished in 260 ms. 
Jan 23 17:32:32.270000 audit: BPF prog-id=63 op=LOAD Jan 23 17:32:32.270000 audit: BPF prog-id=57 op=UNLOAD Jan 23 17:32:32.271000 audit: BPF prog-id=64 op=LOAD Jan 23 17:32:32.271000 audit: BPF prog-id=48 op=UNLOAD Jan 23 17:32:32.271000 audit: BPF prog-id=65 op=LOAD Jan 23 17:32:32.273770 kernel: audit: type=1334 audit(1769189552.270:294): prog-id=63 op=LOAD Jan 23 17:32:32.273805 kernel: audit: type=1334 audit(1769189552.270:295): prog-id=57 op=UNLOAD Jan 23 17:32:32.273821 kernel: audit: type=1334 audit(1769189552.271:296): prog-id=64 op=LOAD Jan 23 17:32:32.273838 kernel: audit: type=1334 audit(1769189552.271:297): prog-id=48 op=UNLOAD Jan 23 17:32:32.273854 kernel: audit: type=1334 audit(1769189552.271:298): prog-id=65 op=LOAD Jan 23 17:32:32.273870 kernel: audit: type=1334 audit(1769189552.271:299): prog-id=66 op=LOAD Jan 23 17:32:32.271000 audit: BPF prog-id=66 op=LOAD Jan 23 17:32:32.271000 audit: BPF prog-id=49 op=UNLOAD Jan 23 17:32:32.271000 audit: BPF prog-id=50 op=UNLOAD Jan 23 17:32:32.272000 audit: BPF prog-id=67 op=LOAD Jan 23 17:32:32.272000 audit: BPF prog-id=60 op=UNLOAD Jan 23 17:32:32.272000 audit: BPF prog-id=68 op=LOAD Jan 23 17:32:32.272000 audit: BPF prog-id=69 op=LOAD Jan 23 17:32:32.272000 audit: BPF prog-id=61 op=UNLOAD Jan 23 17:32:32.272000 audit: BPF prog-id=62 op=UNLOAD Jan 23 17:32:32.274000 audit: BPF prog-id=70 op=LOAD Jan 23 17:32:32.274000 audit: BPF prog-id=54 op=UNLOAD Jan 23 17:32:32.274000 audit: BPF prog-id=71 op=LOAD Jan 23 17:32:32.274000 audit: BPF prog-id=72 op=LOAD Jan 23 17:32:32.274000 audit: BPF prog-id=55 op=UNLOAD Jan 23 17:32:32.274000 audit: BPF prog-id=56 op=UNLOAD Jan 23 17:32:32.275000 audit: BPF prog-id=73 op=LOAD Jan 23 17:32:32.275000 audit: BPF prog-id=58 op=UNLOAD Jan 23 17:32:32.277000 audit: BPF prog-id=74 op=LOAD Jan 23 17:32:32.277000 audit: BPF prog-id=75 op=LOAD Jan 23 17:32:32.277000 audit: BPF prog-id=46 op=UNLOAD Jan 23 17:32:32.277000 audit: BPF prog-id=47 op=UNLOAD Jan 23 17:32:32.289000 
audit: BPF prog-id=76 op=LOAD Jan 23 17:32:32.289000 audit: BPF prog-id=59 op=UNLOAD Jan 23 17:32:32.289000 audit: BPF prog-id=77 op=LOAD Jan 23 17:32:32.289000 audit: BPF prog-id=51 op=UNLOAD Jan 23 17:32:32.289000 audit: BPF prog-id=78 op=LOAD Jan 23 17:32:32.289000 audit: BPF prog-id=79 op=LOAD Jan 23 17:32:32.289000 audit: BPF prog-id=52 op=UNLOAD Jan 23 17:32:32.289000 audit: BPF prog-id=53 op=UNLOAD Jan 23 17:32:32.290000 audit: BPF prog-id=80 op=LOAD Jan 23 17:32:32.290000 audit: BPF prog-id=43 op=UNLOAD Jan 23 17:32:32.290000 audit: BPF prog-id=81 op=LOAD Jan 23 17:32:32.290000 audit: BPF prog-id=82 op=LOAD Jan 23 17:32:32.290000 audit: BPF prog-id=44 op=UNLOAD Jan 23 17:32:32.290000 audit: BPF prog-id=45 op=UNLOAD Jan 23 17:32:32.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:32.303226 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:32.306554 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:32:32.306854 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:32.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:32.306921 systemd[1]: kubelet.service: Consumed 100ms CPU time, 95.3M memory peak. Jan 23 17:32:32.308714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:32.875956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:32.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:32:32.880495 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:32:33.276846 kubelet[2577]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:32:33.276846 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:32:33.276846 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:32:33.277179 kubelet[2577]: I0123 17:32:33.276879 2577 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:32:34.150719 kubelet[2577]: I0123 17:32:34.150669 2577 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 17:32:34.150719 kubelet[2577]: I0123 17:32:34.150702 2577 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:32:34.150933 kubelet[2577]: I0123 17:32:34.150916 2577 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:32:34.177260 kubelet[2577]: E0123 17:32:34.177167 2577 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.6.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.6.147:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 17:32:34.177260 kubelet[2577]: I0123 17:32:34.177205 2577 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:32:34.188267 kubelet[2577]: I0123 17:32:34.188241 2577 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:32:34.191139 kubelet[2577]: I0123 17:32:34.191099 2577 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 17:32:34.191804 kubelet[2577]: I0123 17:32:34.191550 2577 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:32:34.191804 kubelet[2577]: I0123 17:32:34.191585 2577 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-1-0-4-2c8b61c80e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"cont
ainer","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:32:34.191932 kubelet[2577]: I0123 17:32:34.191849 2577 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:32:34.191932 kubelet[2577]: I0123 17:32:34.191861 2577 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 17:32:34.192098 kubelet[2577]: I0123 17:32:34.192079 2577 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:32:34.196363 kubelet[2577]: I0123 17:32:34.196320 2577 kubelet.go:480] "Attempting to sync node with API server" Jan 23 17:32:34.196363 kubelet[2577]: I0123 17:32:34.196353 2577 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:32:34.196474 kubelet[2577]: I0123 17:32:34.196458 2577 kubelet.go:386] "Adding apiserver pod source" Jan 23 17:32:34.198899 kubelet[2577]: I0123 17:32:34.198867 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:32:34.200624 kubelet[2577]: I0123 17:32:34.200540 2577 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 17:32:34.201544 kubelet[2577]: E0123 17:32:34.201493 2577 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.6.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.6.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:32:34.202117 kubelet[2577]: I0123 17:32:34.202095 2577 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 
17:32:34.202249 kubelet[2577]: W0123 17:32:34.202236 2577 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:32:34.202787 kubelet[2577]: E0123 17:32:34.202761 2577 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.6.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-1-0-4-2c8b61c80e&limit=500&resourceVersion=0\": dial tcp 10.0.6.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:32:34.204693 kubelet[2577]: I0123 17:32:34.204657 2577 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:32:34.204783 kubelet[2577]: I0123 17:32:34.204703 2577 server.go:1289] "Started kubelet" Jan 23 17:32:34.204888 kubelet[2577]: I0123 17:32:34.204859 2577 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:32:34.205889 kubelet[2577]: I0123 17:32:34.205870 2577 server.go:317] "Adding debug handlers to kubelet server" Jan 23 17:32:34.207445 kubelet[2577]: I0123 17:32:34.207380 2577 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:32:34.207604 kubelet[2577]: I0123 17:32:34.207580 2577 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:32:34.207672 kubelet[2577]: I0123 17:32:34.207659 2577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:32:34.208510 kubelet[2577]: I0123 17:32:34.208481 2577 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:32:34.212012 kubelet[2577]: E0123 17:32:34.211925 2577 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" Jan 23 17:32:34.212226 kubelet[2577]: I0123 
17:32:34.212200 2577 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:32:34.212365 kubelet[2577]: I0123 17:32:34.212291 2577 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:32:34.212365 kubelet[2577]: I0123 17:32:34.212340 2577 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:32:34.212518 kubelet[2577]: I0123 17:32:34.212492 2577 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:32:34.212633 kubelet[2577]: I0123 17:32:34.212599 2577 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:32:34.212769 kubelet[2577]: E0123 17:32:34.212716 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-1-0-4-2c8b61c80e?timeout=10s\": dial tcp 10.0.6.147:6443: connect: connection refused" interval="200ms" Jan 23 17:32:34.212934 kubelet[2577]: E0123 17:32:34.212490 2577 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.6.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.6.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:32:34.213563 kubelet[2577]: E0123 17:32:34.213497 2577 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:32:34.213685 kubelet[2577]: I0123 17:32:34.213601 2577 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:32:34.214788 kubelet[2577]: E0123 17:32:34.212713 2577 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.6.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.6.147:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-1-0-4-2c8b61c80e.188d6c86a6b3818e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-1-0-4-2c8b61c80e,UID:ci-4547-1-0-4-2c8b61c80e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-1-0-4-2c8b61c80e,},FirstTimestamp:2026-01-23 17:32:34.204672398 +0000 UTC m=+1.320953400,LastTimestamp:2026-01-23 17:32:34.204672398 +0000 UTC m=+1.320953400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-1-0-4-2c8b61c80e,}" Jan 23 17:32:34.217000 audit[2594]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:34.217000 audit[2594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff72456f0 a2=0 a3=0 items=0 ppid=2577 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 17:32:34.219000 audit[2595]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2595 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:34.219000 audit[2595]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff04ec0b0 a2=0 a3=0 items=0 ppid=2577 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 17:32:34.222000 audit[2599]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:34.222000 audit[2599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcaf9c410 a2=0 a3=0 items=0 ppid=2577 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:32:34.223000 audit[2602]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:34.223000 audit[2602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd0845420 a2=0 a3=0 items=0 ppid=2577 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:32:34.230155 kubelet[2577]: I0123 17:32:34.230108 2577 cpu_manager.go:221] "Starting 
CPU manager" policy="none" Jan 23 17:32:34.230450 kubelet[2577]: I0123 17:32:34.230243 2577 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:32:34.230450 kubelet[2577]: I0123 17:32:34.230262 2577 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:32:34.233445 kubelet[2577]: I0123 17:32:34.233423 2577 policy_none.go:49] "None policy: Start" Jan 23 17:32:34.233661 kubelet[2577]: I0123 17:32:34.233647 2577 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:32:34.233727 kubelet[2577]: I0123 17:32:34.233718 2577 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:32:34.234000 audit[2605]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:34.234000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff6820050 a2=0 a3=0 items=0 ppid=2577 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 23 17:32:34.236687 kubelet[2577]: I0123 17:32:34.235843 2577 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 23 17:32:34.236000 audit[2606]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:34.236000 audit[2606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc8318940 a2=0 a3=0 items=0 ppid=2577 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 17:32:34.236000 audit[2607]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:34.236000 audit[2607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff6ab4ed0 a2=0 a3=0 items=0 ppid=2577 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 17:32:34.237483 kubelet[2577]: I0123 17:32:34.237465 2577 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 17:32:34.237561 kubelet[2577]: I0123 17:32:34.237551 2577 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 17:32:34.237627 kubelet[2577]: I0123 17:32:34.237615 2577 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 17:32:34.237672 kubelet[2577]: I0123 17:32:34.237665 2577 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 17:32:34.237832 kubelet[2577]: E0123 17:32:34.237735 2577 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:32:34.237000 audit[2609]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:34.237000 audit[2609]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe113df60 a2=0 a3=0 items=0 ppid=2577 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.237000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 17:32:34.238000 audit[2610]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:34.238000 audit[2610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9682510 a2=0 a3=0 items=0 ppid=2577 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 17:32:34.239000 audit[2611]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:34.239000 audit[2611]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa15bcf0 a2=0 a3=0 items=0 ppid=2577 
pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 17:32:34.239000 audit[2612]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:34.239000 audit[2612]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc96b5b40 a2=0 a3=0 items=0 ppid=2577 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 17:32:34.241270 kubelet[2577]: E0123 17:32:34.241235 2577 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.6.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.6.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:32:34.241000 audit[2613]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:34.241000 audit[2613]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeef46900 a2=0 a3=0 items=0 ppid=2577 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.241000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 17:32:34.243448 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:32:34.256647 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:32:34.259635 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:32:34.282119 kubelet[2577]: E0123 17:32:34.282086 2577 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:32:34.282418 kubelet[2577]: I0123 17:32:34.282291 2577 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:32:34.282418 kubelet[2577]: I0123 17:32:34.282304 2577 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:32:34.282599 kubelet[2577]: I0123 17:32:34.282554 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:32:34.283479 kubelet[2577]: E0123 17:32:34.283442 2577 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:32:34.283551 kubelet[2577]: E0123 17:32:34.283495 2577 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-1-0-4-2c8b61c80e\" not found" Jan 23 17:32:34.350269 systemd[1]: Created slice kubepods-burstable-podf59a9cf0afcd6832e11ebb2127f6bf74.slice - libcontainer container kubepods-burstable-podf59a9cf0afcd6832e11ebb2127f6bf74.slice. 
Jan 23 17:32:34.384175 kubelet[2577]: I0123 17:32:34.384140 2577 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.385764 kubelet[2577]: E0123 17:32:34.385075 2577 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.147:6443/api/v1/nodes\": dial tcp 10.0.6.147:6443: connect: connection refused" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.385764 kubelet[2577]: E0123 17:32:34.385249 2577 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.388412 systemd[1]: Created slice kubepods-burstable-pode7821995b38876e88410ed7b6267609f.slice - libcontainer container kubepods-burstable-pode7821995b38876e88410ed7b6267609f.slice. Jan 23 17:32:34.390406 kubelet[2577]: E0123 17:32:34.390384 2577 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.392440 systemd[1]: Created slice kubepods-burstable-podea521ad569d1899152a9c3f02bf4e664.slice - libcontainer container kubepods-burstable-podea521ad569d1899152a9c3f02bf4e664.slice. 
Jan 23 17:32:34.393940 kubelet[2577]: E0123 17:32:34.393908 2577 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.413820 kubelet[2577]: E0123 17:32:34.413685 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-1-0-4-2c8b61c80e?timeout=10s\": dial tcp 10.0.6.147:6443: connect: connection refused" interval="400ms" Jan 23 17:32:34.514079 kubelet[2577]: I0123 17:32:34.513972 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f59a9cf0afcd6832e11ebb2127f6bf74-ca-certs\") pod \"kube-apiserver-ci-4547-1-0-4-2c8b61c80e\" (UID: \"f59a9cf0afcd6832e11ebb2127f6bf74\") " pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.514079 kubelet[2577]: I0123 17:32:34.514035 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f59a9cf0afcd6832e11ebb2127f6bf74-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-1-0-4-2c8b61c80e\" (UID: \"f59a9cf0afcd6832e11ebb2127f6bf74\") " pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.514309 kubelet[2577]: I0123 17:32:34.514193 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.514309 kubelet[2577]: I0123 17:32:34.514274 2577 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-k8s-certs\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.514388 kubelet[2577]: I0123 17:32:34.514347 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-kubeconfig\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.514388 kubelet[2577]: I0123 17:32:34.514372 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea521ad569d1899152a9c3f02bf4e664-kubeconfig\") pod \"kube-scheduler-ci-4547-1-0-4-2c8b61c80e\" (UID: \"ea521ad569d1899152a9c3f02bf4e664\") " pod="kube-system/kube-scheduler-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.514455 kubelet[2577]: I0123 17:32:34.514400 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f59a9cf0afcd6832e11ebb2127f6bf74-k8s-certs\") pod \"kube-apiserver-ci-4547-1-0-4-2c8b61c80e\" (UID: \"f59a9cf0afcd6832e11ebb2127f6bf74\") " pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.514455 kubelet[2577]: I0123 17:32:34.514424 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-ca-certs\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " 
pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.514520 kubelet[2577]: I0123 17:32:34.514458 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.587467 kubelet[2577]: I0123 17:32:34.587439 2577 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.587823 kubelet[2577]: E0123 17:32:34.587792 2577 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.147:6443/api/v1/nodes\": dial tcp 10.0.6.147:6443: connect: connection refused" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:34.687553 containerd[1668]: time="2026-01-23T17:32:34.687174845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-1-0-4-2c8b61c80e,Uid:f59a9cf0afcd6832e11ebb2127f6bf74,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:34.691972 containerd[1668]: time="2026-01-23T17:32:34.691933749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-1-0-4-2c8b61c80e,Uid:e7821995b38876e88410ed7b6267609f,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:34.695896 containerd[1668]: time="2026-01-23T17:32:34.695768207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-1-0-4-2c8b61c80e,Uid:ea521ad569d1899152a9c3f02bf4e664,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:34.708566 containerd[1668]: time="2026-01-23T17:32:34.708509670Z" level=info msg="connecting to shim 9be4a4ab32a62f2583ec3f3edb0dbcc52fda2f255d70f956cbb83362cec3a024" address="unix:///run/containerd/s/2ab77da1995fe5e27dda484fa9953b2e519e8056a8a58742549424f24d38e99e" 
namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:34.727118 containerd[1668]: time="2026-01-23T17:32:34.727071001Z" level=info msg="connecting to shim 47409e2447e6e4077f15a10fe271d1bc1294f0526e4276965472105749ff31c3" address="unix:///run/containerd/s/1dcc55d8d4d3c45d28465ec0e863f6f4e21607eadf312b84dbcc6dde08186c94" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:34.740040 systemd[1]: Started cri-containerd-9be4a4ab32a62f2583ec3f3edb0dbcc52fda2f255d70f956cbb83362cec3a024.scope - libcontainer container 9be4a4ab32a62f2583ec3f3edb0dbcc52fda2f255d70f956cbb83362cec3a024. Jan 23 17:32:34.746686 containerd[1668]: time="2026-01-23T17:32:34.746240615Z" level=info msg="connecting to shim 73d763fdf8dffe6ca275c903521e47e7610bc20924b29199b574923cc98c396a" address="unix:///run/containerd/s/0e9764bff20e28a437df63199d0e3c3b87260e9fac32d50afa1ebab2f92af1db" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:34.755992 systemd[1]: Started cri-containerd-47409e2447e6e4077f15a10fe271d1bc1294f0526e4276965472105749ff31c3.scope - libcontainer container 47409e2447e6e4077f15a10fe271d1bc1294f0526e4276965472105749ff31c3. 
Jan 23 17:32:34.757000 audit: BPF prog-id=83 op=LOAD Jan 23 17:32:34.758000 audit: BPF prog-id=84 op=LOAD Jan 23 17:32:34.758000 audit[2634]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2623 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653461346162333261363266323538336563336633656462306462 Jan 23 17:32:34.759000 audit: BPF prog-id=84 op=UNLOAD Jan 23 17:32:34.759000 audit[2634]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2623 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653461346162333261363266323538336563336633656462306462 Jan 23 17:32:34.759000 audit: BPF prog-id=85 op=LOAD Jan 23 17:32:34.759000 audit[2634]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2623 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.759000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653461346162333261363266323538336563336633656462306462 Jan 23 17:32:34.759000 audit: BPF prog-id=86 op=LOAD Jan 23 17:32:34.759000 audit[2634]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2623 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653461346162333261363266323538336563336633656462306462 Jan 23 17:32:34.759000 audit: BPF prog-id=86 op=UNLOAD Jan 23 17:32:34.759000 audit[2634]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2623 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653461346162333261363266323538336563336633656462306462 Jan 23 17:32:34.759000 audit: BPF prog-id=85 op=UNLOAD Jan 23 17:32:34.759000 audit[2634]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2623 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:32:34.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653461346162333261363266323538336563336633656462306462 Jan 23 17:32:34.759000 audit: BPF prog-id=87 op=LOAD Jan 23 17:32:34.759000 audit[2634]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2623 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653461346162333261363266323538336563336633656462306462 Jan 23 17:32:34.766000 audit: BPF prog-id=88 op=LOAD Jan 23 17:32:34.767000 audit: BPF prog-id=89 op=LOAD Jan 23 17:32:34.767000 audit[2666]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2654 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437343039653234343765366534303737663135613130666532373164 Jan 23 17:32:34.767000 audit: BPF prog-id=89 op=UNLOAD Jan 23 17:32:34.767000 audit[2666]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437343039653234343765366534303737663135613130666532373164 Jan 23 17:32:34.767000 audit: BPF prog-id=90 op=LOAD Jan 23 17:32:34.767000 audit[2666]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2654 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437343039653234343765366534303737663135613130666532373164 Jan 23 17:32:34.767000 audit: BPF prog-id=91 op=LOAD Jan 23 17:32:34.767000 audit[2666]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2654 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437343039653234343765366534303737663135613130666532373164 Jan 23 17:32:34.767000 audit: BPF prog-id=91 op=UNLOAD Jan 23 17:32:34.767000 audit[2666]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437343039653234343765366534303737663135613130666532373164 Jan 23 17:32:34.767000 audit: BPF prog-id=90 op=UNLOAD Jan 23 17:32:34.767000 audit[2666]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437343039653234343765366534303737663135613130666532373164 Jan 23 17:32:34.767000 audit: BPF prog-id=92 op=LOAD Jan 23 17:32:34.767000 audit[2666]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2654 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437343039653234343765366534303737663135613130666532373164 Jan 23 17:32:34.778017 systemd[1]: Started cri-containerd-73d763fdf8dffe6ca275c903521e47e7610bc20924b29199b574923cc98c396a.scope - libcontainer container 
73d763fdf8dffe6ca275c903521e47e7610bc20924b29199b574923cc98c396a. Jan 23 17:32:34.792000 audit: BPF prog-id=93 op=LOAD Jan 23 17:32:34.793000 audit: BPF prog-id=94 op=LOAD Jan 23 17:32:34.793000 audit[2704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2687 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643736336664663864666665366361323735633930333532316534 Jan 23 17:32:34.793000 audit: BPF prog-id=94 op=UNLOAD Jan 23 17:32:34.793000 audit[2704]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643736336664663864666665366361323735633930333532316534 Jan 23 17:32:34.793000 audit: BPF prog-id=95 op=LOAD Jan 23 17:32:34.793000 audit[2704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2687 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.793000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643736336664663864666665366361323735633930333532316534 Jan 23 17:32:34.793000 audit: BPF prog-id=96 op=LOAD Jan 23 17:32:34.793000 audit[2704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2687 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643736336664663864666665366361323735633930333532316534 Jan 23 17:32:34.794000 audit: BPF prog-id=96 op=UNLOAD Jan 23 17:32:34.794000 audit[2704]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643736336664663864666665366361323735633930333532316534 Jan 23 17:32:34.794000 audit: BPF prog-id=95 op=UNLOAD Jan 23 17:32:34.794000 audit[2704]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:32:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643736336664663864666665366361323735633930333532316534 Jan 23 17:32:34.795614 containerd[1668]: time="2026-01-23T17:32:34.793899969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-1-0-4-2c8b61c80e,Uid:f59a9cf0afcd6832e11ebb2127f6bf74,Namespace:kube-system,Attempt:0,} returns sandbox id \"9be4a4ab32a62f2583ec3f3edb0dbcc52fda2f255d70f956cbb83362cec3a024\"" Jan 23 17:32:34.794000 audit: BPF prog-id=97 op=LOAD Jan 23 17:32:34.794000 audit[2704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2687 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643736336664663864666665366361323735633930333532316534 Jan 23 17:32:34.796336 containerd[1668]: time="2026-01-23T17:32:34.796301021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-1-0-4-2c8b61c80e,Uid:e7821995b38876e88410ed7b6267609f,Namespace:kube-system,Attempt:0,} returns sandbox id \"47409e2447e6e4077f15a10fe271d1bc1294f0526e4276965472105749ff31c3\"" Jan 23 17:32:34.800430 containerd[1668]: time="2026-01-23T17:32:34.800378721Z" level=info msg="CreateContainer within sandbox \"9be4a4ab32a62f2583ec3f3edb0dbcc52fda2f255d70f956cbb83362cec3a024\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:32:34.801635 containerd[1668]: 
time="2026-01-23T17:32:34.801595727Z" level=info msg="CreateContainer within sandbox \"47409e2447e6e4077f15a10fe271d1bc1294f0526e4276965472105749ff31c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:32:34.810224 containerd[1668]: time="2026-01-23T17:32:34.810122528Z" level=info msg="Container aa707e5d6007fc8c440596c9fd7a3d98a453bb57a0b309ef2636bef6a2f597bb: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:34.812763 containerd[1668]: time="2026-01-23T17:32:34.812703141Z" level=info msg="Container 607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:34.814778 kubelet[2577]: E0123 17:32:34.814351 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-1-0-4-2c8b61c80e?timeout=10s\": dial tcp 10.0.6.147:6443: connect: connection refused" interval="800ms" Jan 23 17:32:34.820012 containerd[1668]: time="2026-01-23T17:32:34.819969337Z" level=info msg="CreateContainer within sandbox \"9be4a4ab32a62f2583ec3f3edb0dbcc52fda2f255d70f956cbb83362cec3a024\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa707e5d6007fc8c440596c9fd7a3d98a453bb57a0b309ef2636bef6a2f597bb\"" Jan 23 17:32:34.821135 containerd[1668]: time="2026-01-23T17:32:34.821112342Z" level=info msg="StartContainer for \"aa707e5d6007fc8c440596c9fd7a3d98a453bb57a0b309ef2636bef6a2f597bb\"" Jan 23 17:32:34.822811 containerd[1668]: time="2026-01-23T17:32:34.822784030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-1-0-4-2c8b61c80e,Uid:ea521ad569d1899152a9c3f02bf4e664,Namespace:kube-system,Attempt:0,} returns sandbox id \"73d763fdf8dffe6ca275c903521e47e7610bc20924b29199b574923cc98c396a\"" Jan 23 17:32:34.825304 containerd[1668]: time="2026-01-23T17:32:34.825267723Z" level=info msg="CreateContainer within sandbox 
\"47409e2447e6e4077f15a10fe271d1bc1294f0526e4276965472105749ff31c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea\"" Jan 23 17:32:34.825604 containerd[1668]: time="2026-01-23T17:32:34.825579764Z" level=info msg="connecting to shim aa707e5d6007fc8c440596c9fd7a3d98a453bb57a0b309ef2636bef6a2f597bb" address="unix:///run/containerd/s/2ab77da1995fe5e27dda484fa9953b2e519e8056a8a58742549424f24d38e99e" protocol=ttrpc version=3 Jan 23 17:32:34.826745 containerd[1668]: time="2026-01-23T17:32:34.826720370Z" level=info msg="StartContainer for \"607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea\"" Jan 23 17:32:34.828151 containerd[1668]: time="2026-01-23T17:32:34.828116817Z" level=info msg="connecting to shim 607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea" address="unix:///run/containerd/s/1dcc55d8d4d3c45d28465ec0e863f6f4e21607eadf312b84dbcc6dde08186c94" protocol=ttrpc version=3 Jan 23 17:32:34.829889 containerd[1668]: time="2026-01-23T17:32:34.828964581Z" level=info msg="CreateContainer within sandbox \"73d763fdf8dffe6ca275c903521e47e7610bc20924b29199b574923cc98c396a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:32:34.836605 containerd[1668]: time="2026-01-23T17:32:34.836570098Z" level=info msg="Container 4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:34.845001 systemd[1]: Started cri-containerd-aa707e5d6007fc8c440596c9fd7a3d98a453bb57a0b309ef2636bef6a2f597bb.scope - libcontainer container aa707e5d6007fc8c440596c9fd7a3d98a453bb57a0b309ef2636bef6a2f597bb. 
Jan 23 17:32:34.845518 containerd[1668]: time="2026-01-23T17:32:34.845468622Z" level=info msg="CreateContainer within sandbox \"73d763fdf8dffe6ca275c903521e47e7610bc20924b29199b574923cc98c396a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be\"" Jan 23 17:32:34.847051 containerd[1668]: time="2026-01-23T17:32:34.847017949Z" level=info msg="StartContainer for \"4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be\"" Jan 23 17:32:34.848655 systemd[1]: Started cri-containerd-607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea.scope - libcontainer container 607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea. Jan 23 17:32:34.851003 containerd[1668]: time="2026-01-23T17:32:34.850918168Z" level=info msg="connecting to shim 4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be" address="unix:///run/containerd/s/0e9764bff20e28a437df63199d0e3c3b87260e9fac32d50afa1ebab2f92af1db" protocol=ttrpc version=3 Jan 23 17:32:34.856000 audit: BPF prog-id=98 op=LOAD Jan 23 17:32:34.858000 audit: BPF prog-id=99 op=LOAD Jan 23 17:32:34.858000 audit[2752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2623 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373037653564363030376663386334343035393663396664376133 Jan 23 17:32:34.858000 audit: BPF prog-id=99 op=UNLOAD Jan 23 17:32:34.858000 audit[2752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2623 pid=2752 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373037653564363030376663386334343035393663396664376133 Jan 23 17:32:34.858000 audit: BPF prog-id=100 op=LOAD Jan 23 17:32:34.858000 audit[2752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2623 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373037653564363030376663386334343035393663396664376133 Jan 23 17:32:34.858000 audit: BPF prog-id=101 op=LOAD Jan 23 17:32:34.858000 audit[2752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2623 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373037653564363030376663386334343035393663396664376133 Jan 23 17:32:34.858000 audit: BPF prog-id=101 op=UNLOAD Jan 23 17:32:34.858000 audit[2752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 
items=0 ppid=2623 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373037653564363030376663386334343035393663396664376133 Jan 23 17:32:34.858000 audit: BPF prog-id=100 op=UNLOAD Jan 23 17:32:34.858000 audit[2752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2623 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373037653564363030376663386334343035393663396664376133 Jan 23 17:32:34.858000 audit: BPF prog-id=102 op=LOAD Jan 23 17:32:34.858000 audit[2752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2623 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161373037653564363030376663386334343035393663396664376133 Jan 23 17:32:34.863000 audit: BPF prog-id=103 op=LOAD Jan 23 17:32:34.865000 audit: BPF prog-id=104 op=LOAD Jan 23 17:32:34.865000 
audit[2753]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2654 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376438643735353965303138313134616335323931333562653365 Jan 23 17:32:34.865000 audit: BPF prog-id=104 op=UNLOAD Jan 23 17:32:34.865000 audit[2753]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376438643735353965303138313134616335323931333562653365 Jan 23 17:32:34.866000 audit: BPF prog-id=105 op=LOAD Jan 23 17:32:34.866000 audit[2753]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2654 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376438643735353965303138313134616335323931333562653365 Jan 23 17:32:34.866000 audit: BPF 
prog-id=106 op=LOAD Jan 23 17:32:34.866000 audit[2753]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2654 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376438643735353965303138313134616335323931333562653365 Jan 23 17:32:34.866000 audit: BPF prog-id=106 op=UNLOAD Jan 23 17:32:34.866000 audit[2753]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376438643735353965303138313134616335323931333562653365 Jan 23 17:32:34.866000 audit: BPF prog-id=105 op=UNLOAD Jan 23 17:32:34.866000 audit[2753]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376438643735353965303138313134616335323931333562653365 
Jan 23 17:32:34.866000 audit: BPF prog-id=107 op=LOAD Jan 23 17:32:34.866000 audit[2753]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2654 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376438643735353965303138313134616335323931333562653365 Jan 23 17:32:34.874026 systemd[1]: Started cri-containerd-4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be.scope - libcontainer container 4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be. Jan 23 17:32:34.891000 audit: BPF prog-id=108 op=LOAD Jan 23 17:32:34.892000 audit: BPF prog-id=109 op=LOAD Jan 23 17:32:34.892000 audit[2791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2687 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466383136313832373837373662353364363533616130353533373166 Jan 23 17:32:34.892000 audit: BPF prog-id=109 op=UNLOAD Jan 23 17:32:34.892000 audit[2791]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:32:34.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466383136313832373837373662353364363533616130353533373166 Jan 23 17:32:34.893000 audit: BPF prog-id=110 op=LOAD Jan 23 17:32:34.893000 audit[2791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2687 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466383136313832373837373662353364363533616130353533373166 Jan 23 17:32:34.893000 audit: BPF prog-id=111 op=LOAD Jan 23 17:32:34.893000 audit[2791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2687 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466383136313832373837373662353364363533616130353533373166 Jan 23 17:32:34.893000 audit: BPF prog-id=111 op=UNLOAD Jan 23 17:32:34.893000 audit[2791]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466383136313832373837373662353364363533616130353533373166 Jan 23 17:32:34.893000 audit: BPF prog-id=110 op=UNLOAD Jan 23 17:32:34.893000 audit[2791]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466383136313832373837373662353364363533616130353533373166 Jan 23 17:32:34.893000 audit: BPF prog-id=112 op=LOAD Jan 23 17:32:34.893000 audit[2791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2687 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:34.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466383136313832373837373662353364363533616130353533373166 Jan 23 17:32:34.896061 containerd[1668]: time="2026-01-23T17:32:34.896031830Z" level=info msg="StartContainer for \"aa707e5d6007fc8c440596c9fd7a3d98a453bb57a0b309ef2636bef6a2f597bb\" returns successfully" Jan 23 17:32:34.905816 containerd[1668]: 
time="2026-01-23T17:32:34.905660637Z" level=info msg="StartContainer for \"607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea\" returns successfully" Jan 23 17:32:34.930036 containerd[1668]: time="2026-01-23T17:32:34.929940676Z" level=info msg="StartContainer for \"4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be\" returns successfully" Jan 23 17:32:34.992033 kubelet[2577]: I0123 17:32:34.991860 2577 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:35.249050 kubelet[2577]: E0123 17:32:35.248954 2577 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:35.253517 kubelet[2577]: E0123 17:32:35.253237 2577 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:35.256449 kubelet[2577]: E0123 17:32:35.256423 2577 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.256799 kubelet[2577]: E0123 17:32:36.256767 2577 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.257117 kubelet[2577]: E0123 17:32:36.256811 2577 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.393827 kubelet[2577]: E0123 17:32:36.393782 2577 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547-1-0-4-2c8b61c80e\" not found" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 
17:32:36.542565 kubelet[2577]: I0123 17:32:36.542448 2577 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.542565 kubelet[2577]: E0123 17:32:36.542496 2577 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547-1-0-4-2c8b61c80e\": node \"ci-4547-1-0-4-2c8b61c80e\" not found" Jan 23 17:32:36.613119 kubelet[2577]: I0123 17:32:36.613073 2577 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.620344 kubelet[2577]: E0123 17:32:36.620302 2577 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-1-0-4-2c8b61c80e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.620344 kubelet[2577]: I0123 17:32:36.620341 2577 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.623500 kubelet[2577]: E0123 17:32:36.623461 2577 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.623500 kubelet[2577]: I0123 17:32:36.623497 2577 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:36.626260 kubelet[2577]: E0123 17:32:36.626233 2577 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-1-0-4-2c8b61c80e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:37.202374 kubelet[2577]: I0123 17:32:37.202308 2577 apiserver.go:52] "Watching apiserver" Jan 23 17:32:37.212964 kubelet[2577]: I0123 
17:32:37.212881 2577 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:32:37.258264 kubelet[2577]: I0123 17:32:37.258200 2577 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:37.260126 kubelet[2577]: E0123 17:32:37.260064 2577 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-1-0-4-2c8b61c80e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:38.398637 systemd[1]: Reload requested from client PID 2860 ('systemctl') (unit session-12.scope)... Jan 23 17:32:38.398654 systemd[1]: Reloading... Jan 23 17:32:38.474801 zram_generator::config[2906]: No configuration found. Jan 23 17:32:38.655956 systemd[1]: Reloading finished in 257 ms. Jan 23 17:32:38.686070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:38.705911 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:32:38.706222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:38.707848 kernel: kauditd_printk_skb: 205 callbacks suppressed Jan 23 17:32:38.707898 kernel: audit: type=1131 audit(1769189558.705:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:38.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:38.706297 systemd[1]: kubelet.service: Consumed 1.321s CPU time, 127.9M memory peak. Jan 23 17:32:38.708173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 17:32:38.708000 audit: BPF prog-id=113 op=LOAD Jan 23 17:32:38.708000 audit: BPF prog-id=80 op=UNLOAD Jan 23 17:32:38.710246 kernel: audit: type=1334 audit(1769189558.708:398): prog-id=113 op=LOAD Jan 23 17:32:38.710284 kernel: audit: type=1334 audit(1769189558.708:399): prog-id=80 op=UNLOAD Jan 23 17:32:38.710304 kernel: audit: type=1334 audit(1769189558.708:400): prog-id=114 op=LOAD Jan 23 17:32:38.708000 audit: BPF prog-id=114 op=LOAD Jan 23 17:32:38.710916 kernel: audit: type=1334 audit(1769189558.710:401): prog-id=115 op=LOAD Jan 23 17:32:38.710000 audit: BPF prog-id=115 op=LOAD Jan 23 17:32:38.710000 audit: BPF prog-id=81 op=UNLOAD Jan 23 17:32:38.712251 kernel: audit: type=1334 audit(1769189558.710:402): prog-id=81 op=UNLOAD Jan 23 17:32:38.712292 kernel: audit: type=1334 audit(1769189558.710:403): prog-id=82 op=UNLOAD Jan 23 17:32:38.712315 kernel: audit: type=1334 audit(1769189558.710:404): prog-id=116 op=LOAD Jan 23 17:32:38.712333 kernel: audit: type=1334 audit(1769189558.710:405): prog-id=77 op=UNLOAD Jan 23 17:32:38.710000 audit: BPF prog-id=82 op=UNLOAD Jan 23 17:32:38.710000 audit: BPF prog-id=116 op=LOAD Jan 23 17:32:38.710000 audit: BPF prog-id=77 op=UNLOAD Jan 23 17:32:38.711000 audit: BPF prog-id=117 op=LOAD Jan 23 17:32:38.715048 kernel: audit: type=1334 audit(1769189558.711:406): prog-id=117 op=LOAD Jan 23 17:32:38.712000 audit: BPF prog-id=118 op=LOAD Jan 23 17:32:38.712000 audit: BPF prog-id=78 op=UNLOAD Jan 23 17:32:38.712000 audit: BPF prog-id=79 op=UNLOAD Jan 23 17:32:38.724000 audit: BPF prog-id=119 op=LOAD Jan 23 17:32:38.724000 audit: BPF prog-id=70 op=UNLOAD Jan 23 17:32:38.724000 audit: BPF prog-id=120 op=LOAD Jan 23 17:32:38.724000 audit: BPF prog-id=121 op=LOAD Jan 23 17:32:38.724000 audit: BPF prog-id=71 op=UNLOAD Jan 23 17:32:38.724000 audit: BPF prog-id=72 op=UNLOAD Jan 23 17:32:38.725000 audit: BPF prog-id=122 op=LOAD Jan 23 17:32:38.725000 audit: BPF prog-id=73 op=UNLOAD Jan 23 17:32:38.725000 audit: BPF prog-id=123 
op=LOAD Jan 23 17:32:38.725000 audit: BPF prog-id=124 op=LOAD Jan 23 17:32:38.725000 audit: BPF prog-id=74 op=UNLOAD Jan 23 17:32:38.725000 audit: BPF prog-id=75 op=UNLOAD Jan 23 17:32:38.727000 audit: BPF prog-id=125 op=LOAD Jan 23 17:32:38.727000 audit: BPF prog-id=76 op=UNLOAD Jan 23 17:32:38.727000 audit: BPF prog-id=126 op=LOAD Jan 23 17:32:38.727000 audit: BPF prog-id=63 op=UNLOAD Jan 23 17:32:38.729000 audit: BPF prog-id=127 op=LOAD Jan 23 17:32:38.729000 audit: BPF prog-id=67 op=UNLOAD Jan 23 17:32:38.729000 audit: BPF prog-id=128 op=LOAD Jan 23 17:32:38.729000 audit: BPF prog-id=129 op=LOAD Jan 23 17:32:38.729000 audit: BPF prog-id=68 op=UNLOAD Jan 23 17:32:38.729000 audit: BPF prog-id=69 op=UNLOAD Jan 23 17:32:38.729000 audit: BPF prog-id=130 op=LOAD Jan 23 17:32:38.729000 audit: BPF prog-id=64 op=UNLOAD Jan 23 17:32:38.729000 audit: BPF prog-id=131 op=LOAD Jan 23 17:32:38.729000 audit: BPF prog-id=132 op=LOAD Jan 23 17:32:38.729000 audit: BPF prog-id=65 op=UNLOAD Jan 23 17:32:38.729000 audit: BPF prog-id=66 op=UNLOAD Jan 23 17:32:38.866281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:38.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:38.871353 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:32:38.905190 kubelet[2951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:32:38.905190 kubelet[2951]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 23 17:32:38.905190 kubelet[2951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:32:38.905513 kubelet[2951]: I0123 17:32:38.905220 2951 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:32:38.910547 kubelet[2951]: I0123 17:32:38.910437 2951 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 17:32:38.910547 kubelet[2951]: I0123 17:32:38.910478 2951 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:32:38.910709 kubelet[2951]: I0123 17:32:38.910691 2951 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:32:38.913044 kubelet[2951]: I0123 17:32:38.912681 2951 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 17:32:38.915353 kubelet[2951]: I0123 17:32:38.915331 2951 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:32:38.918501 kubelet[2951]: I0123 17:32:38.918477 2951 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:32:38.921168 kubelet[2951]: I0123 17:32:38.921145 2951 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 17:32:38.921381 kubelet[2951]: I0123 17:32:38.921360 2951 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:32:38.921522 kubelet[2951]: I0123 17:32:38.921386 2951 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-1-0-4-2c8b61c80e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:32:38.921595 kubelet[2951]: I0123 17:32:38.921532 2951 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
17:32:38.921595 kubelet[2951]: I0123 17:32:38.921540 2951 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 17:32:38.921595 kubelet[2951]: I0123 17:32:38.921579 2951 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:32:38.921731 kubelet[2951]: I0123 17:32:38.921720 2951 kubelet.go:480] "Attempting to sync node with API server" Jan 23 17:32:38.921776 kubelet[2951]: I0123 17:32:38.921738 2951 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:32:38.921776 kubelet[2951]: I0123 17:32:38.921775 2951 kubelet.go:386] "Adding apiserver pod source" Jan 23 17:32:38.921821 kubelet[2951]: I0123 17:32:38.921789 2951 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:32:38.922720 kubelet[2951]: I0123 17:32:38.922680 2951 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 17:32:38.925993 kubelet[2951]: I0123 17:32:38.925770 2951 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:32:38.934064 kubelet[2951]: I0123 17:32:38.931910 2951 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:32:38.934064 kubelet[2951]: I0123 17:32:38.931969 2951 server.go:1289] "Started kubelet" Jan 23 17:32:38.934064 kubelet[2951]: I0123 17:32:38.933387 2951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:32:38.938591 kubelet[2951]: I0123 17:32:38.938549 2951 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:32:38.938665 kubelet[2951]: I0123 17:32:38.938651 2951 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:32:38.938806 kubelet[2951]: I0123 17:32:38.938788 2951 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:32:38.939952 kubelet[2951]: E0123 17:32:38.939916 2951 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"ci-4547-1-0-4-2c8b61c80e\" not found" Jan 23 17:32:38.941099 kubelet[2951]: I0123 17:32:38.941064 2951 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:32:38.942093 kubelet[2951]: I0123 17:32:38.941979 2951 server.go:317] "Adding debug handlers to kubelet server" Jan 23 17:32:38.944543 kubelet[2951]: I0123 17:32:38.944444 2951 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:32:38.944618 kubelet[2951]: I0123 17:32:38.944550 2951 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:32:38.945030 kubelet[2951]: I0123 17:32:38.942055 2951 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 17:32:38.946123 kubelet[2951]: I0123 17:32:38.946102 2951 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 17:32:38.946210 kubelet[2951]: I0123 17:32:38.946200 2951 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 17:32:38.946872 kubelet[2951]: I0123 17:32:38.946846 2951 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:32:38.946974 kubelet[2951]: I0123 17:32:38.946959 2951 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 17:32:38.947023 kubelet[2951]: I0123 17:32:38.947015 2951 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 17:32:38.947115 kubelet[2951]: E0123 17:32:38.947100 2951 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:32:38.947328 kubelet[2951]: I0123 17:32:38.947253 2951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:32:38.947634 kubelet[2951]: I0123 17:32:38.947616 2951 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:32:38.947910 kubelet[2951]: I0123 17:32:38.947890 2951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:32:38.951896 kubelet[2951]: E0123 17:32:38.951867 2951 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982179 2951 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982197 2951 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982216 2951 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982351 2951 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982361 2951 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982378 2951 policy_none.go:49] "None policy: Start" Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982386 2951 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982394 2951 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:32:38.983022 kubelet[2951]: I0123 17:32:38.982472 2951 state_mem.go:75] "Updated machine memory state" Jan 23 17:32:38.989949 kubelet[2951]: E0123 17:32:38.989916 2951 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:32:38.990318 kubelet[2951]: I0123 17:32:38.990275 2951 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:32:38.990443 kubelet[2951]: I0123 17:32:38.990408 2951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:32:38.990735 kubelet[2951]: I0123 17:32:38.990706 2951 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:32:38.992483 kubelet[2951]: E0123 17:32:38.992462 2951 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 17:32:39.048421 kubelet[2951]: I0123 17:32:39.048390 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.048814 kubelet[2951]: I0123 17:32:39.048575 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.048884 kubelet[2951]: I0123 17:32:39.048660 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.093153 kubelet[2951]: I0123 17:32:39.093121 2951 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.100048 kubelet[2951]: I0123 17:32:39.100005 2951 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.100165 kubelet[2951]: I0123 17:32:39.100082 2951 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140261 kubelet[2951]: I0123 17:32:39.139862 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea521ad569d1899152a9c3f02bf4e664-kubeconfig\") pod \"kube-scheduler-ci-4547-1-0-4-2c8b61c80e\" (UID: \"ea521ad569d1899152a9c3f02bf4e664\") " pod="kube-system/kube-scheduler-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140261 kubelet[2951]: I0123 17:32:39.139907 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f59a9cf0afcd6832e11ebb2127f6bf74-ca-certs\") pod \"kube-apiserver-ci-4547-1-0-4-2c8b61c80e\" (UID: \"f59a9cf0afcd6832e11ebb2127f6bf74\") " pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140261 kubelet[2951]: I0123 17:32:39.139928 2951 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-k8s-certs\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140261 kubelet[2951]: I0123 17:32:39.139945 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-kubeconfig\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140261 kubelet[2951]: I0123 17:32:39.139962 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f59a9cf0afcd6832e11ebb2127f6bf74-k8s-certs\") pod \"kube-apiserver-ci-4547-1-0-4-2c8b61c80e\" (UID: \"f59a9cf0afcd6832e11ebb2127f6bf74\") " pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140585 kubelet[2951]: I0123 17:32:39.140051 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f59a9cf0afcd6832e11ebb2127f6bf74-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-1-0-4-2c8b61c80e\" (UID: \"f59a9cf0afcd6832e11ebb2127f6bf74\") " pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140585 kubelet[2951]: I0123 17:32:39.140117 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-ca-certs\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " 
pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140585 kubelet[2951]: I0123 17:32:39.140132 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.140585 kubelet[2951]: I0123 17:32:39.140180 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7821995b38876e88410ed7b6267609f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-1-0-4-2c8b61c80e\" (UID: \"e7821995b38876e88410ed7b6267609f\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:39.922991 kubelet[2951]: I0123 17:32:39.922887 2951 apiserver.go:52] "Watching apiserver" Jan 23 17:32:39.939738 kubelet[2951]: I0123 17:32:39.939693 2951 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:32:39.964363 kubelet[2951]: I0123 17:32:39.964324 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:40.262282 kubelet[2951]: E0123 17:32:39.970178 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-1-0-4-2c8b61c80e\" already exists" pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" Jan 23 17:32:40.262282 kubelet[2951]: I0123 17:32:39.993318 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-1-0-4-2c8b61c80e" podStartSLOduration=0.993301154 podStartE2EDuration="993.301154ms" podCreationTimestamp="2026-01-23 17:32:39 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:39.98436051 +0000 UTC m=+1.109666044" watchObservedRunningTime="2026-01-23 17:32:39.993301154 +0000 UTC m=+1.118606688" Jan 23 17:32:40.262282 kubelet[2951]: I0123 17:32:40.002177 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-1-0-4-2c8b61c80e" podStartSLOduration=1.002158718 podStartE2EDuration="1.002158718s" podCreationTimestamp="2026-01-23 17:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:40.001806756 +0000 UTC m=+1.127112290" watchObservedRunningTime="2026-01-23 17:32:40.002158718 +0000 UTC m=+1.127464252" Jan 23 17:32:40.262282 kubelet[2951]: I0123 17:32:40.002299 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-1-0-4-2c8b61c80e" podStartSLOduration=1.002295038 podStartE2EDuration="1.002295038s" podCreationTimestamp="2026-01-23 17:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:39.993038993 +0000 UTC m=+1.118344567" watchObservedRunningTime="2026-01-23 17:32:40.002295038 +0000 UTC m=+1.127600572" Jan 23 17:32:42.706589 kubelet[2951]: I0123 17:32:42.706547 2951 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:32:42.706952 containerd[1668]: time="2026-01-23T17:32:42.706913905Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 17:32:42.707195 kubelet[2951]: I0123 17:32:42.707148 2951 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:32:43.516087 systemd[1]: Created slice kubepods-besteffort-pod16fefb3f_8559_4801_bc48_d46f84f4e8e3.slice - libcontainer container kubepods-besteffort-pod16fefb3f_8559_4801_bc48_d46f84f4e8e3.slice. Jan 23 17:32:43.572407 kubelet[2951]: I0123 17:32:43.572330 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/16fefb3f-8559-4801-bc48-d46f84f4e8e3-kube-proxy\") pod \"kube-proxy-q7pbb\" (UID: \"16fefb3f-8559-4801-bc48-d46f84f4e8e3\") " pod="kube-system/kube-proxy-q7pbb" Jan 23 17:32:43.572407 kubelet[2951]: I0123 17:32:43.572400 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16fefb3f-8559-4801-bc48-d46f84f4e8e3-lib-modules\") pod \"kube-proxy-q7pbb\" (UID: \"16fefb3f-8559-4801-bc48-d46f84f4e8e3\") " pod="kube-system/kube-proxy-q7pbb" Jan 23 17:32:43.572579 kubelet[2951]: I0123 17:32:43.572429 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57plx\" (UniqueName: \"kubernetes.io/projected/16fefb3f-8559-4801-bc48-d46f84f4e8e3-kube-api-access-57plx\") pod \"kube-proxy-q7pbb\" (UID: \"16fefb3f-8559-4801-bc48-d46f84f4e8e3\") " pod="kube-system/kube-proxy-q7pbb" Jan 23 17:32:43.572579 kubelet[2951]: I0123 17:32:43.572481 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16fefb3f-8559-4801-bc48-d46f84f4e8e3-xtables-lock\") pod \"kube-proxy-q7pbb\" (UID: \"16fefb3f-8559-4801-bc48-d46f84f4e8e3\") " pod="kube-system/kube-proxy-q7pbb" Jan 23 17:32:43.828465 containerd[1668]: time="2026-01-23T17:32:43.828308966Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-q7pbb,Uid:16fefb3f-8559-4801-bc48-d46f84f4e8e3,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:43.863236 containerd[1668]: time="2026-01-23T17:32:43.863176097Z" level=info msg="connecting to shim af373451d92bd59992d2ff85e40c2877538dff383d09e559c5c293bc13bee4d4" address="unix:///run/containerd/s/555f5a30f9f0071f779e94acfc82a05baebb654630b7e360ff8601ae700a9128" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:43.887984 systemd[1]: Started cri-containerd-af373451d92bd59992d2ff85e40c2877538dff383d09e559c5c293bc13bee4d4.scope - libcontainer container af373451d92bd59992d2ff85e40c2877538dff383d09e559c5c293bc13bee4d4. Jan 23 17:32:43.895000 audit: BPF prog-id=133 op=LOAD Jan 23 17:32:43.897104 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 23 17:32:43.897140 kernel: audit: type=1334 audit(1769189563.895:439): prog-id=133 op=LOAD Jan 23 17:32:43.897000 audit: BPF prog-id=134 op=LOAD Jan 23 17:32:43.897000 audit[3023]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.901574 kernel: audit: type=1334 audit(1769189563.897:440): prog-id=134 op=LOAD Jan 23 17:32:43.901609 kernel: audit: type=1300 audit(1769189563.897:440): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.901626 kernel: audit: type=1327 audit(1769189563.897:440): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.897000 audit: BPF prog-id=134 op=UNLOAD Jan 23 17:32:43.905177 kernel: audit: type=1334 audit(1769189563.897:441): prog-id=134 op=UNLOAD Jan 23 17:32:43.897000 audit[3023]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.908019 kernel: audit: type=1300 audit(1769189563.897:441): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.908083 kernel: audit: type=1327 audit(1769189563.897:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.897000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.897000 audit: BPF prog-id=135 op=LOAD Jan 23 17:32:43.897000 audit[3023]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.914340 kernel: audit: type=1334 audit(1769189563.897:442): prog-id=135 op=LOAD Jan 23 17:32:43.914447 kernel: audit: type=1300 audit(1769189563.897:442): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.914532 kernel: audit: type=1327 audit(1769189563.897:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.897000 audit: BPF prog-id=136 op=LOAD Jan 23 17:32:43.897000 audit[3023]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.900000 audit: BPF prog-id=136 op=UNLOAD Jan 23 17:32:43.900000 audit[3023]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.900000 audit: BPF prog-id=135 op=UNLOAD Jan 23 17:32:43.900000 audit[3023]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.900000 audit: BPF prog-id=137 op=LOAD Jan 23 17:32:43.900000 audit[3023]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3011 pid=3023 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333733343531643932626435393939326432666638356534306332 Jan 23 17:32:43.927744 containerd[1668]: time="2026-01-23T17:32:43.927706334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q7pbb,Uid:16fefb3f-8559-4801-bc48-d46f84f4e8e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"af373451d92bd59992d2ff85e40c2877538dff383d09e559c5c293bc13bee4d4\"" Jan 23 17:32:43.933953 containerd[1668]: time="2026-01-23T17:32:43.933892484Z" level=info msg="CreateContainer within sandbox \"af373451d92bd59992d2ff85e40c2877538dff383d09e559c5c293bc13bee4d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:32:43.949572 containerd[1668]: time="2026-01-23T17:32:43.948388555Z" level=info msg="Container 0b8ff8bbd69fb3fbd3e1125adf26204e6fd1ffa2bad6fd5e890c4f7cf68392e0: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:43.963561 containerd[1668]: time="2026-01-23T17:32:43.963512629Z" level=info msg="CreateContainer within sandbox \"af373451d92bd59992d2ff85e40c2877538dff383d09e559c5c293bc13bee4d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b8ff8bbd69fb3fbd3e1125adf26204e6fd1ffa2bad6fd5e890c4f7cf68392e0\"" Jan 23 17:32:43.965779 containerd[1668]: time="2026-01-23T17:32:43.964938156Z" level=info msg="StartContainer for \"0b8ff8bbd69fb3fbd3e1125adf26204e6fd1ffa2bad6fd5e890c4f7cf68392e0\"" Jan 23 17:32:43.968120 containerd[1668]: time="2026-01-23T17:32:43.968086292Z" level=info msg="connecting to shim 0b8ff8bbd69fb3fbd3e1125adf26204e6fd1ffa2bad6fd5e890c4f7cf68392e0" 
address="unix:///run/containerd/s/555f5a30f9f0071f779e94acfc82a05baebb654630b7e360ff8601ae700a9128" protocol=ttrpc version=3 Jan 23 17:32:44.001656 systemd[1]: Started cri-containerd-0b8ff8bbd69fb3fbd3e1125adf26204e6fd1ffa2bad6fd5e890c4f7cf68392e0.scope - libcontainer container 0b8ff8bbd69fb3fbd3e1125adf26204e6fd1ffa2bad6fd5e890c4f7cf68392e0. Jan 23 17:32:44.002402 systemd[1]: Created slice kubepods-besteffort-pod3587a1ff_cfc0_4e01_a005_0fba08a46a37.slice - libcontainer container kubepods-besteffort-pod3587a1ff_cfc0_4e01_a005_0fba08a46a37.slice. Jan 23 17:32:44.070000 audit: BPF prog-id=138 op=LOAD Jan 23 17:32:44.070000 audit[3049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3011 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386666386262643639666233666264336531313235616466323632 Jan 23 17:32:44.070000 audit: BPF prog-id=139 op=LOAD Jan 23 17:32:44.070000 audit[3049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3011 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386666386262643639666233666264336531313235616466323632 Jan 23 17:32:44.070000 audit: BPF prog-id=139 op=UNLOAD Jan 23 17:32:44.070000 
audit[3049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3011 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386666386262643639666233666264336531313235616466323632 Jan 23 17:32:44.070000 audit: BPF prog-id=138 op=UNLOAD Jan 23 17:32:44.070000 audit[3049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3011 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386666386262643639666233666264336531313235616466323632 Jan 23 17:32:44.070000 audit: BPF prog-id=140 op=LOAD Jan 23 17:32:44.070000 audit[3049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3011 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386666386262643639666233666264336531313235616466323632 Jan 23 17:32:44.076672 kubelet[2951]: I0123 
17:32:44.076632 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3587a1ff-cfc0-4e01-a005-0fba08a46a37-var-lib-calico\") pod \"tigera-operator-7dcd859c48-p42jc\" (UID: \"3587a1ff-cfc0-4e01-a005-0fba08a46a37\") " pod="tigera-operator/tigera-operator-7dcd859c48-p42jc" Jan 23 17:32:44.076672 kubelet[2951]: I0123 17:32:44.076673 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmm5\" (UniqueName: \"kubernetes.io/projected/3587a1ff-cfc0-4e01-a005-0fba08a46a37-kube-api-access-blmm5\") pod \"tigera-operator-7dcd859c48-p42jc\" (UID: \"3587a1ff-cfc0-4e01-a005-0fba08a46a37\") " pod="tigera-operator/tigera-operator-7dcd859c48-p42jc" Jan 23 17:32:44.093642 containerd[1668]: time="2026-01-23T17:32:44.092718983Z" level=info msg="StartContainer for \"0b8ff8bbd69fb3fbd3e1125adf26204e6fd1ffa2bad6fd5e890c4f7cf68392e0\" returns successfully" Jan 23 17:32:44.239000 audit[3116]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.239000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc449790 a2=0 a3=1 items=0 ppid=3062 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 17:32:44.240000 audit[3117]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.240000 audit[3117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5940210 a2=0 a3=1 items=0 ppid=3062 
pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 17:32:44.241000 audit[3119]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.241000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0b9e450 a2=0 a3=1 items=0 ppid=3062 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.241000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 17:32:44.242000 audit[3120]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.243000 audit[3121]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.243000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3502e70 a2=0 a3=1 items=0 ppid=3062 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 17:32:44.242000 audit[3120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2ba4c80 a2=0 a3=1 items=0 ppid=3062 
pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 17:32:44.246000 audit[3124]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.246000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe61cc620 a2=0 a3=1 items=0 ppid=3062 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 17:32:44.309787 containerd[1668]: time="2026-01-23T17:32:44.309576207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p42jc,Uid:3587a1ff-cfc0-4e01-a005-0fba08a46a37,Namespace:tigera-operator,Attempt:0,}" Jan 23 17:32:44.326553 containerd[1668]: time="2026-01-23T17:32:44.326509810Z" level=info msg="connecting to shim 3e15772bbbb00a65e922a4e6722f642c8569c6d92df1a13b256c2063175ffe70" address="unix:///run/containerd/s/84765a42794d95cafb5d6a7f2d78d655b901520979a81ccbec6e50b57cfb2435" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:44.342000 audit[3159]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.342000 audit[3159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff62ff5c0 a2=0 a3=1 items=0 ppid=3062 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 17:32:44.345000 audit[3161]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.345000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffc6e43c0 a2=0 a3=1 items=0 ppid=3062 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 23 17:32:44.346960 systemd[1]: Started cri-containerd-3e15772bbbb00a65e922a4e6722f642c8569c6d92df1a13b256c2063175ffe70.scope - libcontainer container 3e15772bbbb00a65e922a4e6722f642c8569c6d92df1a13b256c2063175ffe70. 
Jan 23 17:32:44.349000 audit[3166]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.349000 audit[3166]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd0f92e80 a2=0 a3=1 items=0 ppid=3062 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 23 17:32:44.350000 audit[3172]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.350000 audit[3172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3505860 a2=0 a3=1 items=0 ppid=3062 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 17:32:44.354000 audit[3174]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.354000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd8f7d080 a2=0 a3=1 items=0 ppid=3062 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:32:44.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 17:32:44.355000 audit[3175]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.355000 audit[3175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3baddb0 a2=0 a3=1 items=0 ppid=3062 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 17:32:44.357000 audit: BPF prog-id=141 op=LOAD Jan 23 17:32:44.357000 audit: BPF prog-id=142 op=LOAD Jan 23 17:32:44.357000 audit[3147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3135 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365313537373262626262303061363565393232613465363732326636 Jan 23 17:32:44.357000 audit: BPF prog-id=142 op=UNLOAD Jan 23 17:32:44.357000 audit[3147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365313537373262626262303061363565393232613465363732326636 Jan 23 17:32:44.357000 audit: BPF prog-id=143 op=LOAD Jan 23 17:32:44.357000 audit[3147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3135 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365313537373262626262303061363565393232613465363732326636 Jan 23 17:32:44.357000 audit: BPF prog-id=144 op=LOAD Jan 23 17:32:44.357000 audit[3147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3135 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365313537373262626262303061363565393232613465363732326636 Jan 23 17:32:44.357000 audit: BPF prog-id=144 op=UNLOAD Jan 23 17:32:44.357000 audit[3147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365313537373262626262303061363565393232613465363732326636 Jan 23 17:32:44.357000 audit: BPF prog-id=143 op=UNLOAD Jan 23 17:32:44.357000 audit[3147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365313537373262626262303061363565393232613465363732326636 Jan 23 17:32:44.357000 audit: BPF prog-id=145 op=LOAD Jan 23 17:32:44.357000 audit[3147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3135 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365313537373262626262303061363565393232613465363732326636 Jan 23 17:32:44.358000 audit[3177]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.358000 audit[3177]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc02da9f0 a2=0 a3=1 items=0 ppid=3062 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 17:32:44.362000 audit[3180]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.362000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffecb07960 a2=0 a3=1 items=0 ppid=3062 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 23 17:32:44.364000 audit[3181]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.364000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc42ee870 a2=0 a3=1 items=0 ppid=3062 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.364000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 17:32:44.366000 audit[3183]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.366000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd2e4a960 a2=0 a3=1 items=0 ppid=3062 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 17:32:44.367000 audit[3184]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.367000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4412440 a2=0 a3=1 items=0 ppid=3062 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.367000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 17:32:44.371000 audit[3186]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.371000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd7cfe870 a2=0 a3=1 items=0 ppid=3062 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 17:32:44.374000 audit[3190]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.374000 audit[3190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff0c11820 a2=0 a3=1 items=0 ppid=3062 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.374000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 17:32:44.377000 audit[3198]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.377000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffd2340b0 a2=0 a3=1 items=0 ppid=3062 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.377000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 17:32:44.378000 audit[3199]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.378000 audit[3199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc2b2d860 a2=0 a3=1 items=0 ppid=3062 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 17:32:44.381000 audit[3201]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.381000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd750a0d0 a2=0 a3=1 items=0 ppid=3062 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:44.385272 containerd[1668]: time="2026-01-23T17:32:44.385225578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p42jc,Uid:3587a1ff-cfc0-4e01-a005-0fba08a46a37,Namespace:tigera-operator,Attempt:0,} returns sandbox id 
\"3e15772bbbb00a65e922a4e6722f642c8569c6d92df1a13b256c2063175ffe70\"" Jan 23 17:32:44.387425 containerd[1668]: time="2026-01-23T17:32:44.387372029Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 17:32:44.387000 audit[3204]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.387000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff79baf70 a2=0 a3=1 items=0 ppid=3062 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.387000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:44.389000 audit[3205]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.389000 audit[3205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc916c540 a2=0 a3=1 items=0 ppid=3062 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.389000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 17:32:44.392000 audit[3207]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:44.392000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffeb73d670 a2=0 a3=1 items=0 ppid=3062 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 17:32:44.411000 audit[3213]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:44.411000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff73e2420 a2=0 a3=1 items=0 ppid=3062 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:44.422000 audit[3213]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:44.422000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff73e2420 a2=0 a3=1 items=0 ppid=3062 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.422000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:44.423000 audit[3218]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.423000 audit[3218]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc3721780 a2=0 a3=1 items=0 ppid=3062 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 17:32:44.426000 audit[3220]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3220 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.426000 audit[3220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffff7b4760 a2=0 a3=1 items=0 ppid=3062 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 23 17:32:44.430000 audit[3223]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.430000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffec038260 a2=0 a3=1 items=0 ppid=3062 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.430000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 23 17:32:44.431000 audit[3224]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.431000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe82993f0 a2=0 a3=1 items=0 ppid=3062 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 17:32:44.433000 audit[3226]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.433000 audit[3226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd563ae20 a2=0 a3=1 items=0 ppid=3062 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 17:32:44.434000 audit[3227]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.434000 audit[3227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 
a0=3 a1=ffffefa82d40 a2=0 a3=1 items=0 ppid=3062 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 17:32:44.436000 audit[3229]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.436000 audit[3229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc20ceca0 a2=0 a3=1 items=0 ppid=3062 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 23 17:32:44.440000 audit[3232]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.440000 audit[3232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe4441040 a2=0 a3=1 items=0 ppid=3062 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.440000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 17:32:44.441000 audit[3233]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.441000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffece8fe0 a2=0 a3=1 items=0 ppid=3062 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 17:32:44.443000 audit[3235]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.443000 audit[3235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffedf9eab0 a2=0 a3=1 items=0 ppid=3062 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 17:32:44.444000 audit[3236]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.444000 audit[3236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd884c240 a2=0 
a3=1 items=0 ppid=3062 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 17:32:44.447000 audit[3238]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.447000 audit[3238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffddc43f60 a2=0 a3=1 items=0 ppid=3062 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.447000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 17:32:44.451000 audit[3241]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.451000 audit[3241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb282790 a2=0 a3=1 items=0 ppid=3062 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.451000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 17:32:44.455000 audit[3244]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.455000 audit[3244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff59a61a0 a2=0 a3=1 items=0 ppid=3062 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 23 17:32:44.456000 audit[3245]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.456000 audit[3245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffda186130 a2=0 a3=1 items=0 ppid=3062 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 17:32:44.458000 audit[3247]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.458000 audit[3247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 
a0=3 a1=ffffc29b4df0 a2=0 a3=1 items=0 ppid=3062 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:44.461000 audit[3250]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.461000 audit[3250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffde3073c0 a2=0 a3=1 items=0 ppid=3062 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:44.462000 audit[3251]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.462000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd0f3630 a2=0 a3=1 items=0 ppid=3062 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 17:32:44.465000 audit[3253]: 
NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.465000 audit[3253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd77e03b0 a2=0 a3=1 items=0 ppid=3062 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.465000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 17:32:44.466000 audit[3254]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.466000 audit[3254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5904d00 a2=0 a3=1 items=0 ppid=3062 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 17:32:44.468000 audit[3256]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.468000 audit[3256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc2e9e7e0 a2=0 a3=1 items=0 ppid=3062 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.468000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:32:44.472000 audit[3259]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:44.472000 audit[3259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffef2aa000 a2=0 a3=1 items=0 ppid=3062 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:32:44.475000 audit[3261]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 17:32:44.475000 audit[3261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffde7fc500 a2=0 a3=1 items=0 ppid=3062 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.475000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:44.475000 audit[3261]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 17:32:44.475000 audit[3261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffde7fc500 a2=0 a3=1 items=0 ppid=3062 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:44.475000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:44.989654 kubelet[2951]: I0123 17:32:44.989288 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q7pbb" podStartSLOduration=1.9892711410000001 podStartE2EDuration="1.989271141s" podCreationTimestamp="2026-01-23 17:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:44.98907546 +0000 UTC m=+6.114380954" watchObservedRunningTime="2026-01-23 17:32:44.989271141 +0000 UTC m=+6.114576635" Jan 23 17:32:45.678243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836159423.mount: Deactivated successfully. Jan 23 17:32:46.285454 containerd[1668]: time="2026-01-23T17:32:46.285396939Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:46.286693 containerd[1668]: time="2026-01-23T17:32:46.286456264Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 23 17:32:46.287364 containerd[1668]: time="2026-01-23T17:32:46.287338068Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:46.289554 containerd[1668]: time="2026-01-23T17:32:46.289507159Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:46.290156 containerd[1668]: time="2026-01-23T17:32:46.290121762Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id 
\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.902714213s" Jan 23 17:32:46.290156 containerd[1668]: time="2026-01-23T17:32:46.290153162Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 17:32:46.294189 containerd[1668]: time="2026-01-23T17:32:46.294163342Z" level=info msg="CreateContainer within sandbox \"3e15772bbbb00a65e922a4e6722f642c8569c6d92df1a13b256c2063175ffe70\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 17:32:46.302302 containerd[1668]: time="2026-01-23T17:32:46.302224501Z" level=info msg="Container 19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:46.310740 containerd[1668]: time="2026-01-23T17:32:46.310641343Z" level=info msg="CreateContainer within sandbox \"3e15772bbbb00a65e922a4e6722f642c8569c6d92df1a13b256c2063175ffe70\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c\"" Jan 23 17:32:46.311263 containerd[1668]: time="2026-01-23T17:32:46.311239066Z" level=info msg="StartContainer for \"19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c\"" Jan 23 17:32:46.312174 containerd[1668]: time="2026-01-23T17:32:46.312142790Z" level=info msg="connecting to shim 19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c" address="unix:///run/containerd/s/84765a42794d95cafb5d6a7f2d78d655b901520979a81ccbec6e50b57cfb2435" protocol=ttrpc version=3 Jan 23 17:32:46.338141 systemd[1]: Started cri-containerd-19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c.scope - libcontainer container 
19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c. Jan 23 17:32:46.346000 audit: BPF prog-id=146 op=LOAD Jan 23 17:32:46.347000 audit: BPF prog-id=147 op=LOAD Jan 23 17:32:46.347000 audit[3270]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3135 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139646564616463356333356533343134613439366566373462316538 Jan 23 17:32:46.347000 audit: BPF prog-id=147 op=UNLOAD Jan 23 17:32:46.347000 audit[3270]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139646564616463356333356533343134613439366566373462316538 Jan 23 17:32:46.347000 audit: BPF prog-id=148 op=LOAD Jan 23 17:32:46.347000 audit[3270]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3135 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.347000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139646564616463356333356533343134613439366566373462316538 Jan 23 17:32:46.348000 audit: BPF prog-id=149 op=LOAD Jan 23 17:32:46.348000 audit[3270]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3135 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139646564616463356333356533343134613439366566373462316538 Jan 23 17:32:46.348000 audit: BPF prog-id=149 op=UNLOAD Jan 23 17:32:46.348000 audit[3270]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139646564616463356333356533343134613439366566373462316538 Jan 23 17:32:46.348000 audit: BPF prog-id=148 op=UNLOAD Jan 23 17:32:46.348000 audit[3270]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:32:46.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139646564616463356333356533343134613439366566373462316538 Jan 23 17:32:46.348000 audit: BPF prog-id=150 op=LOAD Jan 23 17:32:46.348000 audit[3270]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3135 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139646564616463356333356533343134613439366566373462316538 Jan 23 17:32:46.365078 containerd[1668]: time="2026-01-23T17:32:46.363808244Z" level=info msg="StartContainer for \"19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c\" returns successfully" Jan 23 17:32:51.611269 sudo[1979]: pam_unix(sudo:session): session closed for user root Jan 23 17:32:51.610000 audit[1979]: USER_END pid=1979 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:51.612048 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 17:32:51.612093 kernel: audit: type=1106 audit(1769189571.610:519): pid=1979 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 17:32:51.614000 audit[1979]: CRED_DISP pid=1979 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:51.615878 kernel: audit: type=1104 audit(1769189571.614:520): pid=1979 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:51.709840 sshd[1978]: Connection closed by 4.153.228.146 port 58480 Jan 23 17:32:51.710137 sshd-session[1974]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:51.710000 audit[1974]: USER_END pid=1974 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:51.714000 audit[1974]: CRED_DISP pid=1974 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:51.718071 systemd[1]: sshd@10-10.0.6.147:22-4.153.228.146:58480.service: Deactivated successfully. 
Jan 23 17:32:51.720232 kernel: audit: type=1106 audit(1769189571.710:521): pid=1974 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:51.720265 kernel: audit: type=1104 audit(1769189571.714:522): pid=1974 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:32:51.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.6.147:22-4.153.228.146:58480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:51.723076 kernel: audit: type=1131 audit(1769189571.719:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.6.147:22-4.153.228.146:58480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:51.723066 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 17:32:51.725024 systemd[1]: session-12.scope: Consumed 7.406s CPU time, 222.7M memory peak. Jan 23 17:32:51.726693 systemd-logind[1643]: Session 12 logged out. Waiting for processes to exit. Jan 23 17:32:51.728445 systemd-logind[1643]: Removed session 12. 
Jan 23 17:32:52.576000 audit[3363]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:52.580803 kernel: audit: type=1325 audit(1769189572.576:524): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:52.576000 audit[3363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd5b03740 a2=0 a3=1 items=0 ppid=3062 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:52.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:52.586774 kernel: audit: type=1300 audit(1769189572.576:524): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd5b03740 a2=0 a3=1 items=0 ppid=3062 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:52.588982 kernel: audit: type=1327 audit(1769189572.576:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:52.590000 audit[3363]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:52.590000 audit[3363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd5b03740 a2=0 a3=1 items=0 ppid=3062 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:52.597289 
kernel: audit: type=1325 audit(1769189572.590:525): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:52.597352 kernel: audit: type=1300 audit(1769189572.590:525): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd5b03740 a2=0 a3=1 items=0 ppid=3062 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:52.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:52.605000 audit[3365]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:52.605000 audit[3365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcc5e0300 a2=0 a3=1 items=0 ppid=3062 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:52.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:52.609000 audit[3365]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:52.609000 audit[3365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcc5e0300 a2=0 a3=1 items=0 ppid=3062 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:52.609000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:53.378784 kubelet[2951]: I0123 17:32:53.378716 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-p42jc" podStartSLOduration=8.474738794 podStartE2EDuration="10.378671454s" podCreationTimestamp="2026-01-23 17:32:43 +0000 UTC" firstStartedPulling="2026-01-23 17:32:44.387109347 +0000 UTC m=+5.512414841" lastFinishedPulling="2026-01-23 17:32:46.291041967 +0000 UTC m=+7.416347501" observedRunningTime="2026-01-23 17:32:46.999817683 +0000 UTC m=+8.125123217" watchObservedRunningTime="2026-01-23 17:32:53.378671454 +0000 UTC m=+14.503976948" Jan 23 17:32:57.794000 audit[3367]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:57.796371 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 23 17:32:57.796478 kernel: audit: type=1325 audit(1769189577.794:528): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:57.794000 audit[3367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc65dcc90 a2=0 a3=1 items=0 ppid=3062 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:57.802182 kernel: audit: type=1300 audit(1769189577.794:528): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc65dcc90 a2=0 a3=1 items=0 ppid=3062 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:57.794000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:57.804747 kernel: audit: type=1327 audit(1769189577.794:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:57.805000 audit[3367]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:57.808782 kernel: audit: type=1325 audit(1769189577.805:529): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:57.805000 audit[3367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc65dcc90 a2=0 a3=1 items=0 ppid=3062 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:57.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:57.813546 kernel: audit: type=1300 audit(1769189577.805:529): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc65dcc90 a2=0 a3=1 items=0 ppid=3062 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:57.813692 kernel: audit: type=1327 audit(1769189577.805:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:57.831000 audit[3369]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:57.831000 audit[3369]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=6736 a0=3 a1=ffffd9fe2860 a2=0 a3=1 items=0 ppid=3062 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:57.837545 kernel: audit: type=1325 audit(1769189577.831:530): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:57.837607 kernel: audit: type=1300 audit(1769189577.831:530): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd9fe2860 a2=0 a3=1 items=0 ppid=3062 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:57.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:57.842224 kernel: audit: type=1327 audit(1769189577.831:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:57.842281 kernel: audit: type=1325 audit(1769189577.838:531): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:57.838000 audit[3369]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:57.838000 audit[3369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd9fe2860 a2=0 a3=1 items=0 ppid=3062 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:57.838000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:58.850000 audit[3371]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:58.850000 audit[3371]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc6415f00 a2=0 a3=1 items=0 ppid=3062 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:58.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:58.859000 audit[3371]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:58.859000 audit[3371]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc6415f00 a2=0 a3=1 items=0 ppid=3062 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:58.859000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:59.783172 systemd[1]: Created slice kubepods-besteffort-pod242b8cd4_22e9_462c_bd51_c69400b44569.slice - libcontainer container kubepods-besteffort-pod242b8cd4_22e9_462c_bd51_c69400b44569.slice. 
Jan 23 17:32:59.807000 audit[3376]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:59.807000 audit[3376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe484d780 a2=0 a3=1 items=0 ppid=3062 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:59.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:59.822000 audit[3376]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:59.822000 audit[3376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe484d780 a2=0 a3=1 items=0 ppid=3062 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:59.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:59.873602 kubelet[2951]: I0123 17:32:59.873445 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/242b8cd4-22e9-462c-bd51-c69400b44569-tigera-ca-bundle\") pod \"calico-typha-5859b4f675-96kln\" (UID: \"242b8cd4-22e9-462c-bd51-c69400b44569\") " pod="calico-system/calico-typha-5859b4f675-96kln" Jan 23 17:32:59.873602 kubelet[2951]: I0123 17:32:59.873496 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8shxf\" (UniqueName: 
\"kubernetes.io/projected/242b8cd4-22e9-462c-bd51-c69400b44569-kube-api-access-8shxf\") pod \"calico-typha-5859b4f675-96kln\" (UID: \"242b8cd4-22e9-462c-bd51-c69400b44569\") " pod="calico-system/calico-typha-5859b4f675-96kln" Jan 23 17:32:59.873602 kubelet[2951]: I0123 17:32:59.873523 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/242b8cd4-22e9-462c-bd51-c69400b44569-typha-certs\") pod \"calico-typha-5859b4f675-96kln\" (UID: \"242b8cd4-22e9-462c-bd51-c69400b44569\") " pod="calico-system/calico-typha-5859b4f675-96kln" Jan 23 17:33:00.090176 containerd[1668]: time="2026-01-23T17:33:00.090047456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5859b4f675-96kln,Uid:242b8cd4-22e9-462c-bd51-c69400b44569,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:00.121837 systemd[1]: Created slice kubepods-besteffort-pod2bb03569_9810_4339_813e_6388bcfc4a52.slice - libcontainer container kubepods-besteffort-pod2bb03569_9810_4339_813e_6388bcfc4a52.slice. Jan 23 17:33:00.127358 containerd[1668]: time="2026-01-23T17:33:00.127283479Z" level=info msg="connecting to shim 3342ad8139199de2523bf59fd7a965a63c2615632f20bf1a2b2e6e4f812cb2f9" address="unix:///run/containerd/s/1ae6f856035a8fb80c31a0e9e14dea9a60d84d436065813e91dc13762dc03efd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:00.149027 systemd[1]: Started cri-containerd-3342ad8139199de2523bf59fd7a965a63c2615632f20bf1a2b2e6e4f812cb2f9.scope - libcontainer container 3342ad8139199de2523bf59fd7a965a63c2615632f20bf1a2b2e6e4f812cb2f9. 
Jan 23 17:33:00.158000 audit: BPF prog-id=151 op=LOAD Jan 23 17:33:00.159000 audit: BPF prog-id=152 op=LOAD Jan 23 17:33:00.159000 audit[3398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3387 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333343261643831333931393964653235323362663539666437613936 Jan 23 17:33:00.159000 audit: BPF prog-id=152 op=UNLOAD Jan 23 17:33:00.159000 audit[3398]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333343261643831333931393964653235323362663539666437613936 Jan 23 17:33:00.159000 audit: BPF prog-id=153 op=LOAD Jan 23 17:33:00.159000 audit[3398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3387 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.159000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333343261643831333931393964653235323362663539666437613936 Jan 23 17:33:00.159000 audit: BPF prog-id=154 op=LOAD Jan 23 17:33:00.159000 audit[3398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3387 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333343261643831333931393964653235323362663539666437613936 Jan 23 17:33:00.159000 audit: BPF prog-id=154 op=UNLOAD Jan 23 17:33:00.159000 audit[3398]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333343261643831333931393964653235323362663539666437613936 Jan 23 17:33:00.159000 audit: BPF prog-id=153 op=UNLOAD Jan 23 17:33:00.159000 audit[3398]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:33:00.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333343261643831333931393964653235323362663539666437613936 Jan 23 17:33:00.159000 audit: BPF prog-id=155 op=LOAD Jan 23 17:33:00.159000 audit[3398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3387 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333343261643831333931393964653235323362663539666437613936 Jan 23 17:33:00.175996 kubelet[2951]: I0123 17:33:00.175947 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-flexvol-driver-host\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176193 kubelet[2951]: I0123 17:33:00.176096 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bb03569-9810-4339-813e-6388bcfc4a52-tigera-ca-bundle\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176428 kubelet[2951]: I0123 17:33:00.176278 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-var-run-calico\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176428 kubelet[2951]: I0123 17:33:00.176315 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-xtables-lock\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176428 kubelet[2951]: I0123 17:33:00.176356 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2bb03569-9810-4339-813e-6388bcfc4a52-node-certs\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176428 kubelet[2951]: I0123 17:33:00.176375 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs6kv\" (UniqueName: \"kubernetes.io/projected/2bb03569-9810-4339-813e-6388bcfc4a52-kube-api-access-hs6kv\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176428 kubelet[2951]: I0123 17:33:00.176395 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-cni-log-dir\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176548 kubelet[2951]: I0123 17:33:00.176410 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-policysync\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176661 kubelet[2951]: I0123 17:33:00.176602 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-cni-bin-dir\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176661 kubelet[2951]: I0123 17:33:00.176636 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-var-lib-calico\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176816 kubelet[2951]: I0123 17:33:00.176745 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-lib-modules\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.176816 kubelet[2951]: I0123 17:33:00.176793 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2bb03569-9810-4339-813e-6388bcfc4a52-cni-net-dir\") pod \"calico-node-n9hxr\" (UID: \"2bb03569-9810-4339-813e-6388bcfc4a52\") " pod="calico-system/calico-node-n9hxr" Jan 23 17:33:00.180957 containerd[1668]: time="2026-01-23T17:33:00.180921862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5859b4f675-96kln,Uid:242b8cd4-22e9-462c-bd51-c69400b44569,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"3342ad8139199de2523bf59fd7a965a63c2615632f20bf1a2b2e6e4f812cb2f9\"" Jan 23 17:33:00.182960 containerd[1668]: time="2026-01-23T17:33:00.182880752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 17:33:00.282411 kubelet[2951]: E0123 17:33:00.281844 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.282411 kubelet[2951]: W0123 17:33:00.281870 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.282411 kubelet[2951]: E0123 17:33:00.281891 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.282411 kubelet[2951]: E0123 17:33:00.282292 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.282411 kubelet[2951]: W0123 17:33:00.282305 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.282411 kubelet[2951]: E0123 17:33:00.282317 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.297799 kubelet[2951]: E0123 17:33:00.297551 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.297799 kubelet[2951]: W0123 17:33:00.297574 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.297799 kubelet[2951]: E0123 17:33:00.297593 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.305286 kubelet[2951]: E0123 17:33:00.305135 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:00.369186 kubelet[2951]: E0123 17:33:00.369088 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.369186 kubelet[2951]: W0123 17:33:00.369117 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.369186 kubelet[2951]: E0123 17:33:00.369138 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.369359 kubelet[2951]: E0123 17:33:00.369338 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.369391 kubelet[2951]: W0123 17:33:00.369352 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.369391 kubelet[2951]: E0123 17:33:00.369389 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.371262 kubelet[2951]: E0123 17:33:00.371244 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.371305 kubelet[2951]: W0123 17:33:00.371261 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.371305 kubelet[2951]: E0123 17:33:00.371275 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.371484 kubelet[2951]: E0123 17:33:00.371469 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.371484 kubelet[2951]: W0123 17:33:00.371482 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.371542 kubelet[2951]: E0123 17:33:00.371491 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.371687 kubelet[2951]: E0123 17:33:00.371674 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.371687 kubelet[2951]: W0123 17:33:00.371686 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.371769 kubelet[2951]: E0123 17:33:00.371696 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.371908 kubelet[2951]: E0123 17:33:00.371894 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.371908 kubelet[2951]: W0123 17:33:00.371908 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.371968 kubelet[2951]: E0123 17:33:00.371917 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.372073 kubelet[2951]: E0123 17:33:00.372052 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.372073 kubelet[2951]: W0123 17:33:00.372062 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.372073 kubelet[2951]: E0123 17:33:00.372070 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.372271 kubelet[2951]: E0123 17:33:00.372256 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.372271 kubelet[2951]: W0123 17:33:00.372269 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.372346 kubelet[2951]: E0123 17:33:00.372279 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.372769 kubelet[2951]: E0123 17:33:00.372737 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.372816 kubelet[2951]: W0123 17:33:00.372785 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.372816 kubelet[2951]: E0123 17:33:00.372798 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.373030 kubelet[2951]: E0123 17:33:00.373004 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.373030 kubelet[2951]: W0123 17:33:00.373018 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.373030 kubelet[2951]: E0123 17:33:00.373027 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.373346 kubelet[2951]: E0123 17:33:00.373186 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.373346 kubelet[2951]: W0123 17:33:00.373198 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.373346 kubelet[2951]: E0123 17:33:00.373207 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.373433 kubelet[2951]: E0123 17:33:00.373366 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.373433 kubelet[2951]: W0123 17:33:00.373374 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.373433 kubelet[2951]: E0123 17:33:00.373383 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.373629 kubelet[2951]: E0123 17:33:00.373596 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.373629 kubelet[2951]: W0123 17:33:00.373610 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.373629 kubelet[2951]: E0123 17:33:00.373619 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.374007 kubelet[2951]: E0123 17:33:00.373974 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.374007 kubelet[2951]: W0123 17:33:00.373989 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.374007 kubelet[2951]: E0123 17:33:00.373998 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.374268 kubelet[2951]: E0123 17:33:00.374243 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.374268 kubelet[2951]: W0123 17:33:00.374255 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.374268 kubelet[2951]: E0123 17:33:00.374264 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.374803 kubelet[2951]: E0123 17:33:00.374786 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.374850 kubelet[2951]: W0123 17:33:00.374805 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.374850 kubelet[2951]: E0123 17:33:00.374824 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.375018 kubelet[2951]: E0123 17:33:00.375001 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.375018 kubelet[2951]: W0123 17:33:00.375015 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.375058 kubelet[2951]: E0123 17:33:00.375025 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.375280 kubelet[2951]: E0123 17:33:00.375266 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.375307 kubelet[2951]: W0123 17:33:00.375281 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.375307 kubelet[2951]: E0123 17:33:00.375291 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.375586 kubelet[2951]: E0123 17:33:00.375572 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.375586 kubelet[2951]: W0123 17:33:00.375585 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.375638 kubelet[2951]: E0123 17:33:00.375595 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.375967 kubelet[2951]: E0123 17:33:00.375922 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.376166 kubelet[2951]: W0123 17:33:00.375952 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.376208 kubelet[2951]: E0123 17:33:00.376172 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.379374 kubelet[2951]: E0123 17:33:00.379349 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.379374 kubelet[2951]: W0123 17:33:00.379370 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.379466 kubelet[2951]: E0123 17:33:00.379387 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.379466 kubelet[2951]: I0123 17:33:00.379422 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk5ck\" (UniqueName: \"kubernetes.io/projected/decbe37e-6413-4ebb-af5d-fd959613c007-kube-api-access-fk5ck\") pod \"csi-node-driver-gfmmc\" (UID: \"decbe37e-6413-4ebb-af5d-fd959613c007\") " pod="calico-system/csi-node-driver-gfmmc" Jan 23 17:33:00.379642 kubelet[2951]: E0123 17:33:00.379622 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.379642 kubelet[2951]: W0123 17:33:00.379637 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.379873 kubelet[2951]: E0123 17:33:00.379648 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.379873 kubelet[2951]: I0123 17:33:00.379668 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/decbe37e-6413-4ebb-af5d-fd959613c007-registration-dir\") pod \"csi-node-driver-gfmmc\" (UID: \"decbe37e-6413-4ebb-af5d-fd959613c007\") " pod="calico-system/csi-node-driver-gfmmc" Jan 23 17:33:00.379980 kubelet[2951]: E0123 17:33:00.379963 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.380040 kubelet[2951]: W0123 17:33:00.380028 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.380095 kubelet[2951]: E0123 17:33:00.380085 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.380414 kubelet[2951]: E0123 17:33:00.380288 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.380414 kubelet[2951]: W0123 17:33:00.380300 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.380414 kubelet[2951]: E0123 17:33:00.380310 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.380570 kubelet[2951]: E0123 17:33:00.380555 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.380630 kubelet[2951]: W0123 17:33:00.380618 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.380682 kubelet[2951]: E0123 17:33:00.380672 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.381427 kubelet[2951]: E0123 17:33:00.381359 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.381782 kubelet[2951]: W0123 17:33:00.381762 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.382006 kubelet[2951]: E0123 17:33:00.381884 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.382315 kubelet[2951]: E0123 17:33:00.382294 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.382795 kubelet[2951]: W0123 17:33:00.382550 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.382795 kubelet[2951]: E0123 17:33:00.382589 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.382795 kubelet[2951]: I0123 17:33:00.382627 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/decbe37e-6413-4ebb-af5d-fd959613c007-socket-dir\") pod \"csi-node-driver-gfmmc\" (UID: \"decbe37e-6413-4ebb-af5d-fd959613c007\") " pod="calico-system/csi-node-driver-gfmmc" Jan 23 17:33:00.382973 kubelet[2951]: E0123 17:33:00.382956 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.383026 kubelet[2951]: W0123 17:33:00.383014 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.383161 kubelet[2951]: E0123 17:33:00.383148 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.383254 kubelet[2951]: I0123 17:33:00.383241 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/decbe37e-6413-4ebb-af5d-fd959613c007-kubelet-dir\") pod \"csi-node-driver-gfmmc\" (UID: \"decbe37e-6413-4ebb-af5d-fd959613c007\") " pod="calico-system/csi-node-driver-gfmmc" Jan 23 17:33:00.383850 kubelet[2951]: E0123 17:33:00.383825 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.383850 kubelet[2951]: W0123 17:33:00.383849 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.383907 kubelet[2951]: E0123 17:33:00.383863 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.384148 kubelet[2951]: E0123 17:33:00.384134 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.384176 kubelet[2951]: W0123 17:33:00.384149 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.384176 kubelet[2951]: E0123 17:33:00.384160 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.384392 kubelet[2951]: E0123 17:33:00.384369 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.384392 kubelet[2951]: W0123 17:33:00.384385 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.384898 kubelet[2951]: E0123 17:33:00.384394 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.384898 kubelet[2951]: I0123 17:33:00.384420 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/decbe37e-6413-4ebb-af5d-fd959613c007-varrun\") pod \"csi-node-driver-gfmmc\" (UID: \"decbe37e-6413-4ebb-af5d-fd959613c007\") " pod="calico-system/csi-node-driver-gfmmc" Jan 23 17:33:00.384898 kubelet[2951]: E0123 17:33:00.384674 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.384898 kubelet[2951]: W0123 17:33:00.384685 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.384898 kubelet[2951]: E0123 17:33:00.384695 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.385067 kubelet[2951]: E0123 17:33:00.385046 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.385067 kubelet[2951]: W0123 17:33:00.385064 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.385125 kubelet[2951]: E0123 17:33:00.385076 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.385351 kubelet[2951]: E0123 17:33:00.385336 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.385351 kubelet[2951]: W0123 17:33:00.385350 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.385414 kubelet[2951]: E0123 17:33:00.385361 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.385542 kubelet[2951]: E0123 17:33:00.385529 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.385542 kubelet[2951]: W0123 17:33:00.385539 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.385586 kubelet[2951]: E0123 17:33:00.385547 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.426877 containerd[1668]: time="2026-01-23T17:33:00.426832468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9hxr,Uid:2bb03569-9810-4339-813e-6388bcfc4a52,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:00.454303 containerd[1668]: time="2026-01-23T17:33:00.454155802Z" level=info msg="connecting to shim c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58" address="unix:///run/containerd/s/296ac1a804ce57877b98398516221dab8fb70a96d4a43154984b6f959837ec8f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:00.474973 systemd[1]: Started cri-containerd-c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58.scope - libcontainer container c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58. 
Jan 23 17:33:00.485873 kubelet[2951]: E0123 17:33:00.485799 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.485873 kubelet[2951]: W0123 17:33:00.485822 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.485873 kubelet[2951]: E0123 17:33:00.485842 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.486065 kubelet[2951]: E0123 17:33:00.486049 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.486065 kubelet[2951]: W0123 17:33:00.486060 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.486163 kubelet[2951]: E0123 17:33:00.486070 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.485000 audit: BPF prog-id=156 op=LOAD Jan 23 17:33:00.486282 kubelet[2951]: E0123 17:33:00.486268 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.486316 kubelet[2951]: W0123 17:33:00.486283 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.486316 kubelet[2951]: E0123 17:33:00.486297 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.486488 kubelet[2951]: E0123 17:33:00.486475 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.486517 kubelet[2951]: W0123 17:33:00.486488 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.486517 kubelet[2951]: E0123 17:33:00.486498 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.486000 audit: BPF prog-id=157 op=LOAD Jan 23 17:33:00.486000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3484 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335393430366131353565666636396531633763333134376234633763 Jan 23 17:33:00.486000 audit: BPF prog-id=157 op=UNLOAD Jan 23 17:33:00.486000 audit[3496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335393430366131353565666636396531633763333134376234633763 Jan 23 17:33:00.487183 kubelet[2951]: E0123 17:33:00.486716 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.487183 kubelet[2951]: W0123 17:33:00.486729 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.487183 kubelet[2951]: E0123 17:33:00.486738 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.487183 kubelet[2951]: E0123 17:33:00.487004 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.487183 kubelet[2951]: W0123 17:33:00.487014 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.487183 kubelet[2951]: E0123 17:33:00.487045 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.487314 kubelet[2951]: E0123 17:33:00.487221 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.487314 kubelet[2951]: W0123 17:33:00.487229 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.487314 kubelet[2951]: E0123 17:33:00.487238 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.487520 kubelet[2951]: E0123 17:33:00.487450 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.487520 kubelet[2951]: W0123 17:33:00.487464 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.487520 kubelet[2951]: E0123 17:33:00.487474 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.486000 audit: BPF prog-id=158 op=LOAD Jan 23 17:33:00.486000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3484 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335393430366131353565666636396531633763333134376234633763 Jan 23 17:33:00.487892 kubelet[2951]: E0123 17:33:00.487642 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.487892 kubelet[2951]: W0123 17:33:00.487650 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.487892 kubelet[2951]: E0123 17:33:00.487658 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume 
plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.487892 kubelet[2951]: E0123 17:33:00.487883 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.487892 kubelet[2951]: W0123 17:33:00.487893 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.486000 audit: BPF prog-id=159 op=LOAD Jan 23 17:33:00.486000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3484 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335393430366131353565666636396531633763333134376234633763 Jan 23 17:33:00.487000 audit: BPF prog-id=159 op=UNLOAD Jan 23 17:33:00.488329 kubelet[2951]: E0123 17:33:00.487902 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.488329 kubelet[2951]: E0123 17:33:00.488072 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.488329 kubelet[2951]: W0123 17:33:00.488081 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.488329 kubelet[2951]: E0123 17:33:00.488097 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.488329 kubelet[2951]: E0123 17:33:00.488231 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.488329 kubelet[2951]: W0123 17:33:00.488240 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.488329 kubelet[2951]: E0123 17:33:00.488268 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.487000 audit[3496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335393430366131353565666636396531633763333134376234633763 Jan 23 17:33:00.487000 audit: BPF prog-id=158 op=UNLOAD Jan 23 17:33:00.487000 audit[3496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335393430366131353565666636396531633763333134376234633763 Jan 23 17:33:00.488834 kubelet[2951]: E0123 17:33:00.488516 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.488834 kubelet[2951]: W0123 17:33:00.488526 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.488834 kubelet[2951]: E0123 17:33:00.488535 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.488834 kubelet[2951]: E0123 17:33:00.488718 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.488834 kubelet[2951]: W0123 17:33:00.488727 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.488834 kubelet[2951]: E0123 17:33:00.488739 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.489096 kubelet[2951]: E0123 17:33:00.488942 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.489096 kubelet[2951]: W0123 17:33:00.488952 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.489096 kubelet[2951]: E0123 17:33:00.488961 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.489173 kubelet[2951]: E0123 17:33:00.489141 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.489173 kubelet[2951]: W0123 17:33:00.489151 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.489173 kubelet[2951]: E0123 17:33:00.489159 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.489331 kubelet[2951]: E0123 17:33:00.489316 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.489364 kubelet[2951]: W0123 17:33:00.489328 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.489364 kubelet[2951]: E0123 17:33:00.489356 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.487000 audit: BPF prog-id=160 op=LOAD Jan 23 17:33:00.487000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3484 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:00.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335393430366131353565666636396531633763333134376234633763 Jan 23 17:33:00.489745 kubelet[2951]: E0123 17:33:00.489529 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.489745 kubelet[2951]: W0123 17:33:00.489537 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.489745 kubelet[2951]: E0123 17:33:00.489546 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.489894 kubelet[2951]: E0123 17:33:00.489794 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.489894 kubelet[2951]: W0123 17:33:00.489804 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.489894 kubelet[2951]: E0123 17:33:00.489827 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.490848 kubelet[2951]: E0123 17:33:00.490829 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.491221 kubelet[2951]: W0123 17:33:00.490917 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.491477 kubelet[2951]: E0123 17:33:00.491307 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.491672 kubelet[2951]: E0123 17:33:00.491574 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.491672 kubelet[2951]: W0123 17:33:00.491587 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.491672 kubelet[2951]: E0123 17:33:00.491597 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.492037 kubelet[2951]: E0123 17:33:00.492021 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.492129 kubelet[2951]: W0123 17:33:00.492095 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.492273 kubelet[2951]: E0123 17:33:00.492257 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.492632 kubelet[2951]: E0123 17:33:00.492610 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.493225 kubelet[2951]: W0123 17:33:00.492749 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.493225 kubelet[2951]: E0123 17:33:00.493024 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.493430 kubelet[2951]: E0123 17:33:00.493415 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.493525 kubelet[2951]: W0123 17:33:00.493511 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.493590 kubelet[2951]: E0123 17:33:00.493577 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.493854 kubelet[2951]: E0123 17:33:00.493839 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.493953 kubelet[2951]: W0123 17:33:00.493941 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.494033 kubelet[2951]: E0123 17:33:00.494005 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:00.501111 kubelet[2951]: E0123 17:33:00.501086 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:00.501294 kubelet[2951]: W0123 17:33:00.501209 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:00.501294 kubelet[2951]: E0123 17:33:00.501232 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:00.508622 containerd[1668]: time="2026-01-23T17:33:00.508560949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9hxr,Uid:2bb03569-9810-4339-813e-6388bcfc4a52,Namespace:calico-system,Attempt:0,} returns sandbox id \"c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58\"" Jan 23 17:33:01.947782 kubelet[2951]: E0123 17:33:01.947676 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:02.289944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977020307.mount: Deactivated successfully. Jan 23 17:33:03.121542 containerd[1668]: time="2026-01-23T17:33:03.121486247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:03.122572 containerd[1668]: time="2026-01-23T17:33:03.122510412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33086690" Jan 23 17:33:03.123772 containerd[1668]: time="2026-01-23T17:33:03.123728458Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:03.125662 containerd[1668]: time="2026-01-23T17:33:03.125612707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:03.126533 containerd[1668]: time="2026-01-23T17:33:03.126181790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.943265958s" Jan 23 17:33:03.126533 containerd[1668]: time="2026-01-23T17:33:03.126213390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 17:33:03.127121 containerd[1668]: time="2026-01-23T17:33:03.127100714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 17:33:03.137452 containerd[1668]: time="2026-01-23T17:33:03.137414445Z" level=info msg="CreateContainer within sandbox \"3342ad8139199de2523bf59fd7a965a63c2615632f20bf1a2b2e6e4f812cb2f9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 17:33:03.145885 containerd[1668]: time="2026-01-23T17:33:03.145840246Z" level=info msg="Container 12bca8fee523d59e380f9ae9c69b79309fa7274d3e7469bfa0b80e581e611fba: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:33:03.152874 containerd[1668]: time="2026-01-23T17:33:03.152839321Z" level=info msg="CreateContainer within sandbox \"3342ad8139199de2523bf59fd7a965a63c2615632f20bf1a2b2e6e4f812cb2f9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"12bca8fee523d59e380f9ae9c69b79309fa7274d3e7469bfa0b80e581e611fba\"" Jan 23 17:33:03.153601 containerd[1668]: time="2026-01-23T17:33:03.153564564Z" level=info msg="StartContainer for \"12bca8fee523d59e380f9ae9c69b79309fa7274d3e7469bfa0b80e581e611fba\"" Jan 23 17:33:03.154932 containerd[1668]: time="2026-01-23T17:33:03.154908531Z" level=info msg="connecting to shim 12bca8fee523d59e380f9ae9c69b79309fa7274d3e7469bfa0b80e581e611fba" address="unix:///run/containerd/s/1ae6f856035a8fb80c31a0e9e14dea9a60d84d436065813e91dc13762dc03efd" protocol=ttrpc version=3 Jan 23 
17:33:03.176211 systemd[1]: Started cri-containerd-12bca8fee523d59e380f9ae9c69b79309fa7274d3e7469bfa0b80e581e611fba.scope - libcontainer container 12bca8fee523d59e380f9ae9c69b79309fa7274d3e7469bfa0b80e581e611fba. Jan 23 17:33:03.187000 audit: BPF prog-id=161 op=LOAD Jan 23 17:33:03.188793 kernel: kauditd_printk_skb: 58 callbacks suppressed Jan 23 17:33:03.188843 kernel: audit: type=1334 audit(1769189583.187:552): prog-id=161 op=LOAD Jan 23 17:33:03.188000 audit: BPF prog-id=162 op=LOAD Jan 23 17:33:03.190152 kernel: audit: type=1334 audit(1769189583.188:553): prog-id=162 op=LOAD Jan 23 17:33:03.190180 kernel: audit: type=1300 audit(1769189583.188:553): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.188000 audit[3557]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.193051 kernel: audit: type=1327 audit(1769189583.188:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.188000 audit: BPF prog-id=162 op=UNLOAD Jan 23 17:33:03.188000 
audit[3557]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.199373 kernel: audit: type=1334 audit(1769189583.188:554): prog-id=162 op=UNLOAD Jan 23 17:33:03.199428 kernel: audit: type=1300 audit(1769189583.188:554): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.199448 kernel: audit: type=1327 audit(1769189583.188:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.188000 audit: BPF prog-id=163 op=LOAD Jan 23 17:33:03.203315 kernel: audit: type=1334 audit(1769189583.188:555): prog-id=163 op=LOAD Jan 23 17:33:03.188000 audit[3557]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.206637 kernel: audit: type=1300 audit(1769189583.188:555): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 
a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.209684 kernel: audit: type=1327 audit(1769189583.188:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.189000 audit: BPF prog-id=164 op=LOAD Jan 23 17:33:03.189000 audit[3557]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.195000 audit: BPF prog-id=164 op=UNLOAD Jan 23 17:33:03.195000 audit[3557]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.195000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.195000 audit: BPF prog-id=163 op=UNLOAD Jan 23 17:33:03.195000 audit[3557]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.195000 audit: BPF prog-id=165 op=LOAD Jan 23 17:33:03.195000 audit[3557]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3387 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:03.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132626361386665653532336435396533383066396165396336396237 Jan 23 17:33:03.232708 containerd[1668]: time="2026-01-23T17:33:03.232672472Z" level=info msg="StartContainer for \"12bca8fee523d59e380f9ae9c69b79309fa7274d3e7469bfa0b80e581e611fba\" returns successfully" Jan 23 17:33:03.269134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003516745.mount: Deactivated successfully. 
Jan 23 17:33:03.947805 kubelet[2951]: E0123 17:33:03.947447 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:04.037891 kubelet[2951]: I0123 17:33:04.037798 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5859b4f675-96kln" podStartSLOduration=2.093106537 podStartE2EDuration="5.037736662s" podCreationTimestamp="2026-01-23 17:32:59 +0000 UTC" firstStartedPulling="2026-01-23 17:33:00.182357509 +0000 UTC m=+21.307663003" lastFinishedPulling="2026-01-23 17:33:03.126987594 +0000 UTC m=+24.252293128" observedRunningTime="2026-01-23 17:33:04.036921978 +0000 UTC m=+25.162227512" watchObservedRunningTime="2026-01-23 17:33:04.037736662 +0000 UTC m=+25.163042156" Jan 23 17:33:04.100503 kubelet[2951]: E0123 17:33:04.100471 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.100503 kubelet[2951]: W0123 17:33:04.100494 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.100654 kubelet[2951]: E0123 17:33:04.100513 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.100700 kubelet[2951]: E0123 17:33:04.100686 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.100739 kubelet[2951]: W0123 17:33:04.100696 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.100783 kubelet[2951]: E0123 17:33:04.100748 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.100948 kubelet[2951]: E0123 17:33:04.100934 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.100948 kubelet[2951]: W0123 17:33:04.100945 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.100998 kubelet[2951]: E0123 17:33:04.100954 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.101107 kubelet[2951]: E0123 17:33:04.101093 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.101107 kubelet[2951]: W0123 17:33:04.101103 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.101147 kubelet[2951]: E0123 17:33:04.101111 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.101280 kubelet[2951]: E0123 17:33:04.101247 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.101280 kubelet[2951]: W0123 17:33:04.101259 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.101280 kubelet[2951]: E0123 17:33:04.101278 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.101420 kubelet[2951]: E0123 17:33:04.101393 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.101420 kubelet[2951]: W0123 17:33:04.101405 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.101420 kubelet[2951]: E0123 17:33:04.101413 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.101539 kubelet[2951]: E0123 17:33:04.101529 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.101539 kubelet[2951]: W0123 17:33:04.101538 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.101587 kubelet[2951]: E0123 17:33:04.101546 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.101683 kubelet[2951]: E0123 17:33:04.101673 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.101705 kubelet[2951]: W0123 17:33:04.101683 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.101705 kubelet[2951]: E0123 17:33:04.101690 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.101862 kubelet[2951]: E0123 17:33:04.101852 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.101895 kubelet[2951]: W0123 17:33:04.101862 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.101895 kubelet[2951]: E0123 17:33:04.101880 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.102007 kubelet[2951]: E0123 17:33:04.101995 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.102007 kubelet[2951]: W0123 17:33:04.102005 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.102052 kubelet[2951]: E0123 17:33:04.102013 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.102145 kubelet[2951]: E0123 17:33:04.102135 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.102170 kubelet[2951]: W0123 17:33:04.102144 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.102170 kubelet[2951]: E0123 17:33:04.102152 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.102289 kubelet[2951]: E0123 17:33:04.102279 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.102312 kubelet[2951]: W0123 17:33:04.102289 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.102312 kubelet[2951]: E0123 17:33:04.102297 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.102440 kubelet[2951]: E0123 17:33:04.102430 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.102465 kubelet[2951]: W0123 17:33:04.102440 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.102465 kubelet[2951]: E0123 17:33:04.102447 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.102588 kubelet[2951]: E0123 17:33:04.102579 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.102610 kubelet[2951]: W0123 17:33:04.102588 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.102610 kubelet[2951]: E0123 17:33:04.102596 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.102743 kubelet[2951]: E0123 17:33:04.102728 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.102743 kubelet[2951]: W0123 17:33:04.102738 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.102826 kubelet[2951]: E0123 17:33:04.102745 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.111369 kubelet[2951]: E0123 17:33:04.111347 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.111369 kubelet[2951]: W0123 17:33:04.111364 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.111459 kubelet[2951]: E0123 17:33:04.111376 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.111583 kubelet[2951]: E0123 17:33:04.111567 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.111583 kubelet[2951]: W0123 17:33:04.111580 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.111630 kubelet[2951]: E0123 17:33:04.111591 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.111808 kubelet[2951]: E0123 17:33:04.111791 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.111839 kubelet[2951]: W0123 17:33:04.111807 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.111839 kubelet[2951]: E0123 17:33:04.111818 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.112006 kubelet[2951]: E0123 17:33:04.111989 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.112006 kubelet[2951]: W0123 17:33:04.112003 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.112049 kubelet[2951]: E0123 17:33:04.112012 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.112147 kubelet[2951]: E0123 17:33:04.112136 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.112169 kubelet[2951]: W0123 17:33:04.112146 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.112189 kubelet[2951]: E0123 17:33:04.112163 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.112323 kubelet[2951]: E0123 17:33:04.112312 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.112349 kubelet[2951]: W0123 17:33:04.112322 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.112349 kubelet[2951]: E0123 17:33:04.112330 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.112568 kubelet[2951]: E0123 17:33:04.112538 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.112568 kubelet[2951]: W0123 17:33:04.112553 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.112568 kubelet[2951]: E0123 17:33:04.112562 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.112733 kubelet[2951]: E0123 17:33:04.112720 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.112733 kubelet[2951]: W0123 17:33:04.112732 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.112795 kubelet[2951]: E0123 17:33:04.112742 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.112922 kubelet[2951]: E0123 17:33:04.112897 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.112922 kubelet[2951]: W0123 17:33:04.112921 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.112965 kubelet[2951]: E0123 17:33:04.112930 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.113070 kubelet[2951]: E0123 17:33:04.113058 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.113070 kubelet[2951]: W0123 17:33:04.113068 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.113120 kubelet[2951]: E0123 17:33:04.113076 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.113224 kubelet[2951]: E0123 17:33:04.113213 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.113224 kubelet[2951]: W0123 17:33:04.113223 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.113264 kubelet[2951]: E0123 17:33:04.113231 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.113491 kubelet[2951]: E0123 17:33:04.113472 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.113491 kubelet[2951]: W0123 17:33:04.113489 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.113546 kubelet[2951]: E0123 17:33:04.113501 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.113677 kubelet[2951]: E0123 17:33:04.113665 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.113677 kubelet[2951]: W0123 17:33:04.113676 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.113727 kubelet[2951]: E0123 17:33:04.113684 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.113861 kubelet[2951]: E0123 17:33:04.113843 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.113861 kubelet[2951]: W0123 17:33:04.113854 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.113918 kubelet[2951]: E0123 17:33:04.113863 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.114021 kubelet[2951]: E0123 17:33:04.114008 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.114021 kubelet[2951]: W0123 17:33:04.114018 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.114062 kubelet[2951]: E0123 17:33:04.114026 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.114184 kubelet[2951]: E0123 17:33:04.114174 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.114208 kubelet[2951]: W0123 17:33:04.114184 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.114208 kubelet[2951]: E0123 17:33:04.114194 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.114432 kubelet[2951]: E0123 17:33:04.114419 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.114457 kubelet[2951]: W0123 17:33:04.114432 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.114457 kubelet[2951]: E0123 17:33:04.114443 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:33:04.114617 kubelet[2951]: E0123 17:33:04.114606 2951 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:33:04.114643 kubelet[2951]: W0123 17:33:04.114617 2951 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:33:04.114643 kubelet[2951]: E0123 17:33:04.114625 2951 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:33:04.886851 containerd[1668]: time="2026-01-23T17:33:04.886727946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:04.888309 containerd[1668]: time="2026-01-23T17:33:04.887989792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:04.889483 containerd[1668]: time="2026-01-23T17:33:04.889452360Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:04.892025 containerd[1668]: time="2026-01-23T17:33:04.891969292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:04.892612 containerd[1668]: time="2026-01-23T17:33:04.892583775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.76539138s" Jan 23 17:33:04.892644 containerd[1668]: time="2026-01-23T17:33:04.892616655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 17:33:04.897727 containerd[1668]: time="2026-01-23T17:33:04.897192038Z" level=info msg="CreateContainer within sandbox \"c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 17:33:04.905784 containerd[1668]: time="2026-01-23T17:33:04.905004396Z" level=info msg="Container a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:33:04.912687 containerd[1668]: time="2026-01-23T17:33:04.912622913Z" level=info msg="CreateContainer within sandbox \"c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193\"" Jan 23 17:33:04.913537 containerd[1668]: time="2026-01-23T17:33:04.913498398Z" level=info msg="StartContainer for \"a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193\"" Jan 23 17:33:04.915316 containerd[1668]: time="2026-01-23T17:33:04.915290046Z" level=info msg="connecting to shim a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193" address="unix:///run/containerd/s/296ac1a804ce57877b98398516221dab8fb70a96d4a43154984b6f959837ec8f" protocol=ttrpc version=3 Jan 23 17:33:04.935959 systemd[1]: Started cri-containerd-a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193.scope - libcontainer container a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193. 
Jan 23 17:33:04.996000 audit: BPF prog-id=166 op=LOAD Jan 23 17:33:04.996000 audit[3636]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3484 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:04.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133636465393636333331383862623861303537646636616638313565 Jan 23 17:33:04.996000 audit: BPF prog-id=167 op=LOAD Jan 23 17:33:04.996000 audit[3636]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3484 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:04.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133636465393636333331383862623861303537646636616638313565 Jan 23 17:33:04.996000 audit: BPF prog-id=167 op=UNLOAD Jan 23 17:33:04.996000 audit[3636]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:04.996000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133636465393636333331383862623861303537646636616638313565 Jan 23 17:33:04.996000 audit: BPF prog-id=166 op=UNLOAD Jan 23 17:33:04.996000 audit[3636]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:04.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133636465393636333331383862623861303537646636616638313565 Jan 23 17:33:04.996000 audit: BPF prog-id=168 op=LOAD Jan 23 17:33:04.996000 audit[3636]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3484 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:04.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133636465393636333331383862623861303537646636616638313565 Jan 23 17:33:05.020989 containerd[1668]: time="2026-01-23T17:33:05.020948765Z" level=info msg="StartContainer for \"a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193\" returns successfully" Jan 23 17:33:05.030798 kubelet[2951]: I0123 17:33:05.030747 2951 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:33:05.036066 
systemd[1]: cri-containerd-a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193.scope: Deactivated successfully. Jan 23 17:33:05.039930 containerd[1668]: time="2026-01-23T17:33:05.039863177Z" level=info msg="received container exit event container_id:\"a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193\" id:\"a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193\" pid:3649 exited_at:{seconds:1769189585 nanos:39394415}" Jan 23 17:33:05.039000 audit: BPF prog-id=168 op=UNLOAD Jan 23 17:33:05.063823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3cde96633188bb8a057df6af815e2fd8b8bdb3af3ec132d8c988f54ed7a4193-rootfs.mount: Deactivated successfully. Jan 23 17:33:05.949301 kubelet[2951]: E0123 17:33:05.948112 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:07.948609 kubelet[2951]: E0123 17:33:07.948270 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:09.040390 containerd[1668]: time="2026-01-23T17:33:09.040344882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 17:33:09.948041 kubelet[2951]: E0123 17:33:09.947962 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" 
Jan 23 17:33:11.630911 containerd[1668]: time="2026-01-23T17:33:11.630857789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:11.632487 containerd[1668]: time="2026-01-23T17:33:11.632445637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 23 17:33:11.633451 containerd[1668]: time="2026-01-23T17:33:11.633426482Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:11.635825 containerd[1668]: time="2026-01-23T17:33:11.635691613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:11.636291 containerd[1668]: time="2026-01-23T17:33:11.636268696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.595883534s" Jan 23 17:33:11.636334 containerd[1668]: time="2026-01-23T17:33:11.636298976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 17:33:11.640047 containerd[1668]: time="2026-01-23T17:33:11.640016394Z" level=info msg="CreateContainer within sandbox \"c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 17:33:11.647936 containerd[1668]: time="2026-01-23T17:33:11.647896273Z" level=info msg="Container 
3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:33:11.655633 containerd[1668]: time="2026-01-23T17:33:11.655591911Z" level=info msg="CreateContainer within sandbox \"c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f\"" Jan 23 17:33:11.656084 containerd[1668]: time="2026-01-23T17:33:11.656063953Z" level=info msg="StartContainer for \"3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f\"" Jan 23 17:33:11.657464 containerd[1668]: time="2026-01-23T17:33:11.657429640Z" level=info msg="connecting to shim 3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f" address="unix:///run/containerd/s/296ac1a804ce57877b98398516221dab8fb70a96d4a43154984b6f959837ec8f" protocol=ttrpc version=3 Jan 23 17:33:11.676216 systemd[1]: Started cri-containerd-3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f.scope - libcontainer container 3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f. 
Jan 23 17:33:11.732000 audit: BPF prog-id=169 op=LOAD Jan 23 17:33:11.734002 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 23 17:33:11.734101 kernel: audit: type=1334 audit(1769189591.732:566): prog-id=169 op=LOAD Jan 23 17:33:11.732000 audit[3699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3484 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:11.737622 kernel: audit: type=1300 audit(1769189591.732:566): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3484 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:11.737675 kernel: audit: type=1327 audit(1769189591.732:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323431646137313162306630633166366232616132393836303437 Jan 23 17:33:11.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323431646137313162306630633166366232616132393836303437 Jan 23 17:33:11.732000 audit: BPF prog-id=170 op=LOAD Jan 23 17:33:11.741197 kernel: audit: type=1334 audit(1769189591.732:567): prog-id=170 op=LOAD Jan 23 17:33:11.732000 audit[3699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3484 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:11.744231 kernel: audit: type=1300 audit(1769189591.732:567): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3484 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:11.744395 kernel: audit: type=1327 audit(1769189591.732:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323431646137313162306630633166366232616132393836303437 Jan 23 17:33:11.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323431646137313162306630633166366232616132393836303437 Jan 23 17:33:11.733000 audit: BPF prog-id=170 op=UNLOAD Jan 23 17:33:11.748049 kernel: audit: type=1334 audit(1769189591.733:568): prog-id=170 op=UNLOAD Jan 23 17:33:11.733000 audit[3699]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:11.751354 kernel: audit: type=1300 audit(1769189591.733:568): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:11.751433 kernel: audit: type=1327 audit(1769189591.733:568): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323431646137313162306630633166366232616132393836303437 Jan 23 17:33:11.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323431646137313162306630633166366232616132393836303437 Jan 23 17:33:11.733000 audit: BPF prog-id=169 op=UNLOAD Jan 23 17:33:11.754958 kernel: audit: type=1334 audit(1769189591.733:569): prog-id=169 op=UNLOAD Jan 23 17:33:11.733000 audit[3699]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:11.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323431646137313162306630633166366232616132393836303437 Jan 23 17:33:11.733000 audit: BPF prog-id=171 op=LOAD Jan 23 17:33:11.733000 audit[3699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3484 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:11.733000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323431646137313162306630633166366232616132393836303437 Jan 23 17:33:11.766392 containerd[1668]: time="2026-01-23T17:33:11.766344734Z" level=info msg="StartContainer for \"3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f\" returns successfully" Jan 23 17:33:11.948258 kubelet[2951]: E0123 17:33:11.947587 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:13.024801 systemd[1]: cri-containerd-3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f.scope: Deactivated successfully. Jan 23 17:33:13.025126 systemd[1]: cri-containerd-3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f.scope: Consumed 463ms CPU time, 185.6M memory peak, 165.9M written to disk. Jan 23 17:33:13.026906 containerd[1668]: time="2026-01-23T17:33:13.026872558Z" level=info msg="received container exit event container_id:\"3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f\" id:\"3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f\" pid:3712 exited_at:{seconds:1769189593 nanos:26586076}" Jan 23 17:33:13.028000 audit: BPF prog-id=171 op=UNLOAD Jan 23 17:33:13.048781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e241da711b0f0c1f6b2aa29860478aab2cf91b3e012ca0fc3c45db30fd9f07f-rootfs.mount: Deactivated successfully. 
Jan 23 17:33:13.077483 kubelet[2951]: I0123 17:33:13.077456 2951 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 17:33:14.484933 kubelet[2951]: I0123 17:33:14.484878 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96e04b17-1bf7-4d71-9569-39faa03d6952-config-volume\") pod \"coredns-674b8bbfcf-qcpzs\" (UID: \"96e04b17-1bf7-4d71-9569-39faa03d6952\") " pod="kube-system/coredns-674b8bbfcf-qcpzs" Jan 23 17:33:14.485478 kubelet[2951]: I0123 17:33:14.485306 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbwsp\" (UniqueName: \"kubernetes.io/projected/96e04b17-1bf7-4d71-9569-39faa03d6952-kube-api-access-lbwsp\") pod \"coredns-674b8bbfcf-qcpzs\" (UID: \"96e04b17-1bf7-4d71-9569-39faa03d6952\") " pod="kube-system/coredns-674b8bbfcf-qcpzs" Jan 23 17:33:14.501012 systemd[1]: Created slice kubepods-besteffort-pod287c72fe_ff92_46ec_9e19_273465100dda.slice - libcontainer container kubepods-besteffort-pod287c72fe_ff92_46ec_9e19_273465100dda.slice. Jan 23 17:33:14.510413 systemd[1]: Created slice kubepods-burstable-pod96e04b17_1bf7_4d71_9569_39faa03d6952.slice - libcontainer container kubepods-burstable-pod96e04b17_1bf7_4d71_9569_39faa03d6952.slice. Jan 23 17:33:14.518611 systemd[1]: Created slice kubepods-besteffort-pode9193ceb_0e99_470a_bb0b_413b40f3616d.slice - libcontainer container kubepods-besteffort-pode9193ceb_0e99_470a_bb0b_413b40f3616d.slice. Jan 23 17:33:14.524934 systemd[1]: Created slice kubepods-besteffort-poddecbe37e_6413_4ebb_af5d_fd959613c007.slice - libcontainer container kubepods-besteffort-poddecbe37e_6413_4ebb_af5d_fd959613c007.slice. 
Jan 23 17:33:14.528200 containerd[1668]: time="2026-01-23T17:33:14.528156762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfmmc,Uid:decbe37e-6413-4ebb-af5d-fd959613c007,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:14.537047 systemd[1]: Created slice kubepods-burstable-pod0dc12ef4_de69_4bd9_8385_4e2f0df15cde.slice - libcontainer container kubepods-burstable-pod0dc12ef4_de69_4bd9_8385_4e2f0df15cde.slice. Jan 23 17:33:14.545095 systemd[1]: Created slice kubepods-besteffort-pod49fbdce9_bd52_4d37_a0e6_5d7fa3a92ded.slice - libcontainer container kubepods-besteffort-pod49fbdce9_bd52_4d37_a0e6_5d7fa3a92ded.slice. Jan 23 17:33:14.551191 systemd[1]: Created slice kubepods-besteffort-pod0cda543d_2a18_4d2d_aa42_b11c9c8288f8.slice - libcontainer container kubepods-besteffort-pod0cda543d_2a18_4d2d_aa42_b11c9c8288f8.slice. Jan 23 17:33:14.562489 systemd[1]: Created slice kubepods-besteffort-pode5dae696_8fe0_4228_b217_55297601b045.slice - libcontainer container kubepods-besteffort-pode5dae696_8fe0_4228_b217_55297601b045.slice. 
Jan 23 17:33:14.585615 kubelet[2951]: I0123 17:33:14.585571 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dc12ef4-de69-4bd9-8385-4e2f0df15cde-config-volume\") pod \"coredns-674b8bbfcf-fwt87\" (UID: \"0dc12ef4-de69-4bd9-8385-4e2f0df15cde\") " pod="kube-system/coredns-674b8bbfcf-fwt87" Jan 23 17:33:14.585785 kubelet[2951]: I0123 17:33:14.585635 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/287c72fe-ff92-46ec-9e19-273465100dda-calico-apiserver-certs\") pod \"calico-apiserver-6686778f54-qdppv\" (UID: \"287c72fe-ff92-46ec-9e19-273465100dda\") " pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" Jan 23 17:33:14.585785 kubelet[2951]: I0123 17:33:14.585662 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9193ceb-0e99-470a-bb0b-413b40f3616d-tigera-ca-bundle\") pod \"calico-kube-controllers-5f66fb8c4-942js\" (UID: \"e9193ceb-0e99-470a-bb0b-413b40f3616d\") " pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" Jan 23 17:33:14.585874 kubelet[2951]: I0123 17:33:14.585745 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28f7\" (UniqueName: \"kubernetes.io/projected/e9193ceb-0e99-470a-bb0b-413b40f3616d-kube-api-access-f28f7\") pod \"calico-kube-controllers-5f66fb8c4-942js\" (UID: \"e9193ceb-0e99-470a-bb0b-413b40f3616d\") " pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" Jan 23 17:33:14.585874 kubelet[2951]: I0123 17:33:14.585814 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8x7v\" (UniqueName: \"kubernetes.io/projected/0dc12ef4-de69-4bd9-8385-4e2f0df15cde-kube-api-access-c8x7v\") 
pod \"coredns-674b8bbfcf-fwt87\" (UID: \"0dc12ef4-de69-4bd9-8385-4e2f0df15cde\") " pod="kube-system/coredns-674b8bbfcf-fwt87" Jan 23 17:33:14.585874 kubelet[2951]: I0123 17:33:14.585836 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjstv\" (UniqueName: \"kubernetes.io/projected/e5dae696-8fe0-4228-b217-55297601b045-kube-api-access-vjstv\") pod \"whisker-75b5b6594f-vrvw9\" (UID: \"e5dae696-8fe0-4228-b217-55297601b045\") " pod="calico-system/whisker-75b5b6594f-vrvw9" Jan 23 17:33:14.585968 kubelet[2951]: I0123 17:33:14.585882 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0cda543d-2a18-4d2d-aa42-b11c9c8288f8-calico-apiserver-certs\") pod \"calico-apiserver-6686778f54-7m9rh\" (UID: \"0cda543d-2a18-4d2d-aa42-b11c9c8288f8\") " pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" Jan 23 17:33:14.585968 kubelet[2951]: I0123 17:33:14.585905 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded-config\") pod \"goldmane-666569f655-hp496\" (UID: \"49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded\") " pod="calico-system/goldmane-666569f655-hp496" Jan 23 17:33:14.585968 kubelet[2951]: I0123 17:33:14.585966 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd9b\" (UniqueName: \"kubernetes.io/projected/0cda543d-2a18-4d2d-aa42-b11c9c8288f8-kube-api-access-mqd9b\") pod \"calico-apiserver-6686778f54-7m9rh\" (UID: \"0cda543d-2a18-4d2d-aa42-b11c9c8288f8\") " pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" Jan 23 17:33:14.586076 kubelet[2951]: I0123 17:33:14.585988 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btn72\" (UniqueName: 
\"kubernetes.io/projected/287c72fe-ff92-46ec-9e19-273465100dda-kube-api-access-btn72\") pod \"calico-apiserver-6686778f54-qdppv\" (UID: \"287c72fe-ff92-46ec-9e19-273465100dda\") " pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" Jan 23 17:33:14.586076 kubelet[2951]: I0123 17:33:14.586069 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded-goldmane-ca-bundle\") pod \"goldmane-666569f655-hp496\" (UID: \"49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded\") " pod="calico-system/goldmane-666569f655-hp496" Jan 23 17:33:14.586204 kubelet[2951]: I0123 17:33:14.586185 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded-goldmane-key-pair\") pod \"goldmane-666569f655-hp496\" (UID: \"49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded\") " pod="calico-system/goldmane-666569f655-hp496" Jan 23 17:33:14.586263 kubelet[2951]: I0123 17:33:14.586212 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zgr\" (UniqueName: \"kubernetes.io/projected/49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded-kube-api-access-88zgr\") pod \"goldmane-666569f655-hp496\" (UID: \"49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded\") " pod="calico-system/goldmane-666569f655-hp496" Jan 23 17:33:14.586263 kubelet[2951]: I0123 17:33:14.586231 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e5dae696-8fe0-4228-b217-55297601b045-whisker-backend-key-pair\") pod \"whisker-75b5b6594f-vrvw9\" (UID: \"e5dae696-8fe0-4228-b217-55297601b045\") " pod="calico-system/whisker-75b5b6594f-vrvw9" Jan 23 17:33:14.586381 kubelet[2951]: I0123 17:33:14.586349 2951 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5dae696-8fe0-4228-b217-55297601b045-whisker-ca-bundle\") pod \"whisker-75b5b6594f-vrvw9\" (UID: \"e5dae696-8fe0-4228-b217-55297601b045\") " pod="calico-system/whisker-75b5b6594f-vrvw9" Jan 23 17:33:14.610423 containerd[1668]: time="2026-01-23T17:33:14.610339165Z" level=error msg="Failed to destroy network for sandbox \"ea7177db139a99ef04c6e83e9088c9e24924ed2148ebb6cce02bbbd1285399cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.612468 systemd[1]: run-netns-cni\x2db0646a86\x2df4bc\x2dabef\x2d5839\x2d46ec2f4ce3a9.mount: Deactivated successfully. Jan 23 17:33:14.615009 containerd[1668]: time="2026-01-23T17:33:14.614945548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfmmc,Uid:decbe37e-6413-4ebb-af5d-fd959613c007,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7177db139a99ef04c6e83e9088c9e24924ed2148ebb6cce02bbbd1285399cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.615369 kubelet[2951]: E0123 17:33:14.615330 2951 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7177db139a99ef04c6e83e9088c9e24924ed2148ebb6cce02bbbd1285399cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.615436 kubelet[2951]: E0123 17:33:14.615402 2951 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"ea7177db139a99ef04c6e83e9088c9e24924ed2148ebb6cce02bbbd1285399cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gfmmc" Jan 23 17:33:14.615436 kubelet[2951]: E0123 17:33:14.615421 2951 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7177db139a99ef04c6e83e9088c9e24924ed2148ebb6cce02bbbd1285399cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gfmmc" Jan 23 17:33:14.615494 kubelet[2951]: E0123 17:33:14.615463 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea7177db139a99ef04c6e83e9088c9e24924ed2148ebb6cce02bbbd1285399cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:14.809075 containerd[1668]: time="2026-01-23T17:33:14.808270656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686778f54-qdppv,Uid:287c72fe-ff92-46ec-9e19-273465100dda,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:33:14.815310 containerd[1668]: time="2026-01-23T17:33:14.815167290Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qcpzs,Uid:96e04b17-1bf7-4d71-9569-39faa03d6952,Namespace:kube-system,Attempt:0,}" Jan 23 17:33:14.821873 containerd[1668]: time="2026-01-23T17:33:14.821827963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f66fb8c4-942js,Uid:e9193ceb-0e99-470a-bb0b-413b40f3616d,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:14.843048 containerd[1668]: time="2026-01-23T17:33:14.842983307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fwt87,Uid:0dc12ef4-de69-4bd9-8385-4e2f0df15cde,Namespace:kube-system,Attempt:0,}" Jan 23 17:33:14.850498 containerd[1668]: time="2026-01-23T17:33:14.850177342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hp496,Uid:49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:14.855958 containerd[1668]: time="2026-01-23T17:33:14.855917610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686778f54-7m9rh,Uid:0cda543d-2a18-4d2d-aa42-b11c9c8288f8,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:33:14.870798 containerd[1668]: time="2026-01-23T17:33:14.870598202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b5b6594f-vrvw9,Uid:e5dae696-8fe0-4228-b217-55297601b045,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:14.874654 containerd[1668]: time="2026-01-23T17:33:14.874607542Z" level=error msg="Failed to destroy network for sandbox \"0dd4a956e2bc729056904593285cb6009161fca5d9aea5e630d1f09cd4766314\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.885016 containerd[1668]: time="2026-01-23T17:33:14.884971432Z" level=error msg="Failed to destroy network for sandbox \"e35a8eed749cbbb19f118fd5f1c4f3cf2dc5c784ab9b476372120f0713585edc\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.888256 containerd[1668]: time="2026-01-23T17:33:14.888198688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686778f54-qdppv,Uid:287c72fe-ff92-46ec-9e19-273465100dda,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dd4a956e2bc729056904593285cb6009161fca5d9aea5e630d1f09cd4766314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.888669 kubelet[2951]: E0123 17:33:14.888537 2951 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dd4a956e2bc729056904593285cb6009161fca5d9aea5e630d1f09cd4766314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.889076 kubelet[2951]: E0123 17:33:14.888714 2951 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dd4a956e2bc729056904593285cb6009161fca5d9aea5e630d1f09cd4766314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" Jan 23 17:33:14.889127 kubelet[2951]: E0123 17:33:14.889080 2951 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dd4a956e2bc729056904593285cb6009161fca5d9aea5e630d1f09cd4766314\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" Jan 23 17:33:14.889161 kubelet[2951]: E0123 17:33:14.889137 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6686778f54-qdppv_calico-apiserver(287c72fe-ff92-46ec-9e19-273465100dda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6686778f54-qdppv_calico-apiserver(287c72fe-ff92-46ec-9e19-273465100dda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dd4a956e2bc729056904593285cb6009161fca5d9aea5e630d1f09cd4766314\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:33:14.889949 containerd[1668]: time="2026-01-23T17:33:14.889899137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qcpzs,Uid:96e04b17-1bf7-4d71-9569-39faa03d6952,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35a8eed749cbbb19f118fd5f1c4f3cf2dc5c784ab9b476372120f0713585edc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.891592 kubelet[2951]: E0123 17:33:14.891367 2951 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35a8eed749cbbb19f118fd5f1c4f3cf2dc5c784ab9b476372120f0713585edc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.891592 kubelet[2951]: E0123 17:33:14.891410 2951 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35a8eed749cbbb19f118fd5f1c4f3cf2dc5c784ab9b476372120f0713585edc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qcpzs" Jan 23 17:33:14.891592 kubelet[2951]: E0123 17:33:14.891458 2951 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35a8eed749cbbb19f118fd5f1c4f3cf2dc5c784ab9b476372120f0713585edc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qcpzs" Jan 23 17:33:14.891987 kubelet[2951]: E0123 17:33:14.891507 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qcpzs_kube-system(96e04b17-1bf7-4d71-9569-39faa03d6952)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qcpzs_kube-system(96e04b17-1bf7-4d71-9569-39faa03d6952)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e35a8eed749cbbb19f118fd5f1c4f3cf2dc5c784ab9b476372120f0713585edc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qcpzs" podUID="96e04b17-1bf7-4d71-9569-39faa03d6952" Jan 23 17:33:14.917791 containerd[1668]: time="2026-01-23T17:33:14.917650353Z" level=error msg="Failed to destroy network for 
sandbox \"8f1f4ffab4df4c300e12f51fcf8f761fc71ed1c251fd94dcc8c2c71c6979d598\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.923223 containerd[1668]: time="2026-01-23T17:33:14.922896819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fwt87,Uid:0dc12ef4-de69-4bd9-8385-4e2f0df15cde,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1f4ffab4df4c300e12f51fcf8f761fc71ed1c251fd94dcc8c2c71c6979d598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.923353 kubelet[2951]: E0123 17:33:14.923145 2951 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1f4ffab4df4c300e12f51fcf8f761fc71ed1c251fd94dcc8c2c71c6979d598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.923353 kubelet[2951]: E0123 17:33:14.923204 2951 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1f4ffab4df4c300e12f51fcf8f761fc71ed1c251fd94dcc8c2c71c6979d598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fwt87" Jan 23 17:33:14.923353 kubelet[2951]: E0123 17:33:14.923227 2951 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8f1f4ffab4df4c300e12f51fcf8f761fc71ed1c251fd94dcc8c2c71c6979d598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fwt87" Jan 23 17:33:14.923520 kubelet[2951]: E0123 17:33:14.923283 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fwt87_kube-system(0dc12ef4-de69-4bd9-8385-4e2f0df15cde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fwt87_kube-system(0dc12ef4-de69-4bd9-8385-4e2f0df15cde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f1f4ffab4df4c300e12f51fcf8f761fc71ed1c251fd94dcc8c2c71c6979d598\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fwt87" podUID="0dc12ef4-de69-4bd9-8385-4e2f0df15cde" Jan 23 17:33:14.923676 containerd[1668]: time="2026-01-23T17:33:14.923643542Z" level=error msg="Failed to destroy network for sandbox \"d1ad1fe823527a3021a14c3a0bf30f757c9be04999e8cd68cf1c99d8066743b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.926209 containerd[1668]: time="2026-01-23T17:33:14.926163435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f66fb8c4-942js,Uid:e9193ceb-0e99-470a-bb0b-413b40f3616d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1ad1fe823527a3021a14c3a0bf30f757c9be04999e8cd68cf1c99d8066743b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.926517 kubelet[2951]: E0123 17:33:14.926478 2951 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1ad1fe823527a3021a14c3a0bf30f757c9be04999e8cd68cf1c99d8066743b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.926573 kubelet[2951]: E0123 17:33:14.926560 2951 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1ad1fe823527a3021a14c3a0bf30f757c9be04999e8cd68cf1c99d8066743b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" Jan 23 17:33:14.926599 kubelet[2951]: E0123 17:33:14.926581 2951 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1ad1fe823527a3021a14c3a0bf30f757c9be04999e8cd68cf1c99d8066743b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" Jan 23 17:33:14.926680 kubelet[2951]: E0123 17:33:14.926626 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f66fb8c4-942js_calico-system(e9193ceb-0e99-470a-bb0b-413b40f3616d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f66fb8c4-942js_calico-system(e9193ceb-0e99-470a-bb0b-413b40f3616d)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"d1ad1fe823527a3021a14c3a0bf30f757c9be04999e8cd68cf1c99d8066743b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:33:14.942375 containerd[1668]: time="2026-01-23T17:33:14.942328474Z" level=error msg="Failed to destroy network for sandbox \"70ede8b601d8fe81f2faa690188456d1ad3c11174c7c21847f92df217438cfd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.945312 containerd[1668]: time="2026-01-23T17:33:14.945182088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hp496,Uid:49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ede8b601d8fe81f2faa690188456d1ad3c11174c7c21847f92df217438cfd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.945692 kubelet[2951]: E0123 17:33:14.945646 2951 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ede8b601d8fe81f2faa690188456d1ad3c11174c7c21847f92df217438cfd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.945692 kubelet[2951]: E0123 17:33:14.945709 2951 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"70ede8b601d8fe81f2faa690188456d1ad3c11174c7c21847f92df217438cfd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hp496" Jan 23 17:33:14.945882 kubelet[2951]: E0123 17:33:14.945833 2951 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ede8b601d8fe81f2faa690188456d1ad3c11174c7c21847f92df217438cfd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hp496" Jan 23 17:33:14.946032 kubelet[2951]: E0123 17:33:14.945993 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hp496_calico-system(49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hp496_calico-system(49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70ede8b601d8fe81f2faa690188456d1ad3c11174c7c21847f92df217438cfd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:33:14.950020 containerd[1668]: time="2026-01-23T17:33:14.949940391Z" level=error msg="Failed to destroy network for sandbox \"0878983ec52cbda9cbb264b2767cdad99b26de8087afced876bb1574a4e1cd35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 
17:33:14.953140 containerd[1668]: time="2026-01-23T17:33:14.953092087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686778f54-7m9rh,Uid:0cda543d-2a18-4d2d-aa42-b11c9c8288f8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0878983ec52cbda9cbb264b2767cdad99b26de8087afced876bb1574a4e1cd35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.953341 kubelet[2951]: E0123 17:33:14.953296 2951 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0878983ec52cbda9cbb264b2767cdad99b26de8087afced876bb1574a4e1cd35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.953393 kubelet[2951]: E0123 17:33:14.953352 2951 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0878983ec52cbda9cbb264b2767cdad99b26de8087afced876bb1574a4e1cd35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" Jan 23 17:33:14.953393 kubelet[2951]: E0123 17:33:14.953370 2951 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0878983ec52cbda9cbb264b2767cdad99b26de8087afced876bb1574a4e1cd35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" Jan 23 17:33:14.953446 kubelet[2951]: E0123 17:33:14.953420 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6686778f54-7m9rh_calico-apiserver(0cda543d-2a18-4d2d-aa42-b11c9c8288f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6686778f54-7m9rh_calico-apiserver(0cda543d-2a18-4d2d-aa42-b11c9c8288f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0878983ec52cbda9cbb264b2767cdad99b26de8087afced876bb1574a4e1cd35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:33:14.956463 containerd[1668]: time="2026-01-23T17:33:14.956425023Z" level=error msg="Failed to destroy network for sandbox \"fba9977c81bf5fc31f198285b663961dd28dc25179e8c64a1f2a2acd3ce4a297\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.958422 containerd[1668]: time="2026-01-23T17:33:14.958368433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b5b6594f-vrvw9,Uid:e5dae696-8fe0-4228-b217-55297601b045,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba9977c81bf5fc31f198285b663961dd28dc25179e8c64a1f2a2acd3ce4a297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.958703 kubelet[2951]: E0123 17:33:14.958670 2951 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"fba9977c81bf5fc31f198285b663961dd28dc25179e8c64a1f2a2acd3ce4a297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:33:14.958773 kubelet[2951]: E0123 17:33:14.958722 2951 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba9977c81bf5fc31f198285b663961dd28dc25179e8c64a1f2a2acd3ce4a297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75b5b6594f-vrvw9" Jan 23 17:33:14.958773 kubelet[2951]: E0123 17:33:14.958742 2951 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba9977c81bf5fc31f198285b663961dd28dc25179e8c64a1f2a2acd3ce4a297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75b5b6594f-vrvw9" Jan 23 17:33:14.958824 kubelet[2951]: E0123 17:33:14.958798 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75b5b6594f-vrvw9_calico-system(e5dae696-8fe0-4228-b217-55297601b045)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75b5b6594f-vrvw9_calico-system(e5dae696-8fe0-4228-b217-55297601b045)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fba9977c81bf5fc31f198285b663961dd28dc25179e8c64a1f2a2acd3ce4a297\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-75b5b6594f-vrvw9" podUID="e5dae696-8fe0-4228-b217-55297601b045" Jan 23 17:33:15.063433 containerd[1668]: time="2026-01-23T17:33:15.062478743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 17:33:19.437930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632042406.mount: Deactivated successfully. Jan 23 17:33:19.455918 containerd[1668]: time="2026-01-23T17:33:19.455866415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:19.457354 containerd[1668]: time="2026-01-23T17:33:19.457188542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 23 17:33:19.458171 containerd[1668]: time="2026-01-23T17:33:19.458130706Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:19.460285 containerd[1668]: time="2026-01-23T17:33:19.460248117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:33:19.461000 containerd[1668]: time="2026-01-23T17:33:19.460871440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.398343337s" Jan 23 17:33:19.461000 containerd[1668]: time="2026-01-23T17:33:19.460901080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 17:33:19.477216 containerd[1668]: time="2026-01-23T17:33:19.477175600Z" level=info msg="CreateContainer within sandbox \"c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 17:33:19.485800 containerd[1668]: time="2026-01-23T17:33:19.485185759Z" level=info msg="Container e0323da16a2ff6f3c34b5328c8342bdf5d19941ed76b47cf6689da2becceb130: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:33:19.494415 containerd[1668]: time="2026-01-23T17:33:19.494376804Z" level=info msg="CreateContainer within sandbox \"c59406a155eff69e1c7c3147b4c7c0f2029a62c7cf90d3bfb4ddf5b432a9cb58\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0323da16a2ff6f3c34b5328c8342bdf5d19941ed76b47cf6689da2becceb130\"" Jan 23 17:33:19.494994 containerd[1668]: time="2026-01-23T17:33:19.494964167Z" level=info msg="StartContainer for \"e0323da16a2ff6f3c34b5328c8342bdf5d19941ed76b47cf6689da2becceb130\"" Jan 23 17:33:19.497701 containerd[1668]: time="2026-01-23T17:33:19.497671220Z" level=info msg="connecting to shim e0323da16a2ff6f3c34b5328c8342bdf5d19941ed76b47cf6689da2becceb130" address="unix:///run/containerd/s/296ac1a804ce57877b98398516221dab8fb70a96d4a43154984b6f959837ec8f" protocol=ttrpc version=3 Jan 23 17:33:19.518960 systemd[1]: Started cri-containerd-e0323da16a2ff6f3c34b5328c8342bdf5d19941ed76b47cf6689da2becceb130.scope - libcontainer container e0323da16a2ff6f3c34b5328c8342bdf5d19941ed76b47cf6689da2becceb130. 
Jan 23 17:33:19.574000 audit: BPF prog-id=172 op=LOAD Jan 23 17:33:19.575791 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 23 17:33:19.575837 kernel: audit: type=1334 audit(1769189599.574:572): prog-id=172 op=LOAD Jan 23 17:33:19.574000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3484 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:19.579519 kernel: audit: type=1300 audit(1769189599.574:572): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3484 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:19.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333233646131366132666636663363333462353332386338333432 Jan 23 17:33:19.574000 audit: BPF prog-id=173 op=LOAD Jan 23 17:33:19.583426 kernel: audit: type=1327 audit(1769189599.574:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333233646131366132666636663363333462353332386338333432 Jan 23 17:33:19.583462 kernel: audit: type=1334 audit(1769189599.574:573): prog-id=173 op=LOAD Jan 23 17:33:19.574000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3484 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:19.586770 kernel: audit: type=1300 audit(1769189599.574:573): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3484 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:19.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333233646131366132666636663363333462353332386338333432 Jan 23 17:33:19.590064 kernel: audit: type=1327 audit(1769189599.574:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333233646131366132666636663363333462353332386338333432 Jan 23 17:33:19.575000 audit: BPF prog-id=173 op=UNLOAD Jan 23 17:33:19.590925 kernel: audit: type=1334 audit(1769189599.575:574): prog-id=173 op=UNLOAD Jan 23 17:33:19.575000 audit[4032]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:19.594130 kernel: audit: type=1300 audit(1769189599.575:574): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:19.594220 kernel: audit: type=1327 audit(1769189599.575:574): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333233646131366132666636663363333462353332386338333432 Jan 23 17:33:19.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333233646131366132666636663363333462353332386338333432 Jan 23 17:33:19.575000 audit: BPF prog-id=172 op=UNLOAD Jan 23 17:33:19.597935 kernel: audit: type=1334 audit(1769189599.575:575): prog-id=172 op=UNLOAD Jan 23 17:33:19.575000 audit[4032]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3484 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:19.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333233646131366132666636663363333462353332386338333432 Jan 23 17:33:19.575000 audit: BPF prog-id=174 op=LOAD Jan 23 17:33:19.575000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3484 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:19.575000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333233646131366132666636663363333462353332386338333432 Jan 23 17:33:19.613739 containerd[1668]: time="2026-01-23T17:33:19.613691549Z" level=info msg="StartContainer for \"e0323da16a2ff6f3c34b5328c8342bdf5d19941ed76b47cf6689da2becceb130\" returns successfully" Jan 23 17:33:19.753679 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 17:33:19.754104 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 17:33:19.922997 kubelet[2951]: I0123 17:33:19.922555 2951 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5dae696-8fe0-4228-b217-55297601b045-whisker-ca-bundle\") pod \"e5dae696-8fe0-4228-b217-55297601b045\" (UID: \"e5dae696-8fe0-4228-b217-55297601b045\") " Jan 23 17:33:19.922997 kubelet[2951]: I0123 17:33:19.922602 2951 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjstv\" (UniqueName: \"kubernetes.io/projected/e5dae696-8fe0-4228-b217-55297601b045-kube-api-access-vjstv\") pod \"e5dae696-8fe0-4228-b217-55297601b045\" (UID: \"e5dae696-8fe0-4228-b217-55297601b045\") " Jan 23 17:33:19.922997 kubelet[2951]: I0123 17:33:19.922633 2951 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e5dae696-8fe0-4228-b217-55297601b045-whisker-backend-key-pair\") pod \"e5dae696-8fe0-4228-b217-55297601b045\" (UID: \"e5dae696-8fe0-4228-b217-55297601b045\") " Jan 23 17:33:19.923357 kubelet[2951]: I0123 17:33:19.923248 2951 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5dae696-8fe0-4228-b217-55297601b045-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "e5dae696-8fe0-4228-b217-55297601b045" (UID: "e5dae696-8fe0-4228-b217-55297601b045"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:33:19.925364 kubelet[2951]: I0123 17:33:19.925337 2951 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dae696-8fe0-4228-b217-55297601b045-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e5dae696-8fe0-4228-b217-55297601b045" (UID: "e5dae696-8fe0-4228-b217-55297601b045"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:33:19.925738 kubelet[2951]: I0123 17:33:19.925702 2951 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5dae696-8fe0-4228-b217-55297601b045-kube-api-access-vjstv" (OuterVolumeSpecName: "kube-api-access-vjstv") pod "e5dae696-8fe0-4228-b217-55297601b045" (UID: "e5dae696-8fe0-4228-b217-55297601b045"). InnerVolumeSpecName "kube-api-access-vjstv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:33:20.023993 kubelet[2951]: I0123 17:33:20.023850 2951 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e5dae696-8fe0-4228-b217-55297601b045-whisker-backend-key-pair\") on node \"ci-4547-1-0-4-2c8b61c80e\" DevicePath \"\"" Jan 23 17:33:20.023993 kubelet[2951]: I0123 17:33:20.023884 2951 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5dae696-8fe0-4228-b217-55297601b045-whisker-ca-bundle\") on node \"ci-4547-1-0-4-2c8b61c80e\" DevicePath \"\"" Jan 23 17:33:20.023993 kubelet[2951]: I0123 17:33:20.023894 2951 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vjstv\" (UniqueName: \"kubernetes.io/projected/e5dae696-8fe0-4228-b217-55297601b045-kube-api-access-vjstv\") on node \"ci-4547-1-0-4-2c8b61c80e\" DevicePath \"\"" Jan 23 17:33:20.081814 systemd[1]: Removed slice kubepods-besteffort-pode5dae696_8fe0_4228_b217_55297601b045.slice - libcontainer container kubepods-besteffort-pode5dae696_8fe0_4228_b217_55297601b045.slice. Jan 23 17:33:20.098995 kubelet[2951]: I0123 17:33:20.098923 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n9hxr" podStartSLOduration=1.147335804 podStartE2EDuration="20.09890485s" podCreationTimestamp="2026-01-23 17:33:00 +0000 UTC" firstStartedPulling="2026-01-23 17:33:00.510055477 +0000 UTC m=+21.635361011" lastFinishedPulling="2026-01-23 17:33:19.461624523 +0000 UTC m=+40.586930057" observedRunningTime="2026-01-23 17:33:20.098869569 +0000 UTC m=+41.224175103" watchObservedRunningTime="2026-01-23 17:33:20.09890485 +0000 UTC m=+41.224210344" Jan 23 17:33:20.171704 systemd[1]: Created slice kubepods-besteffort-podde63b504_5477_4705_aea8_65396b064e08.slice - libcontainer container kubepods-besteffort-podde63b504_5477_4705_aea8_65396b064e08.slice. 
Jan 23 17:33:20.225770 kubelet[2951]: I0123 17:33:20.225710 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de63b504-5477-4705-aea8-65396b064e08-whisker-ca-bundle\") pod \"whisker-589548c48f-dtbrr\" (UID: \"de63b504-5477-4705-aea8-65396b064e08\") " pod="calico-system/whisker-589548c48f-dtbrr" Jan 23 17:33:20.225910 kubelet[2951]: I0123 17:33:20.225794 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qtb\" (UniqueName: \"kubernetes.io/projected/de63b504-5477-4705-aea8-65396b064e08-kube-api-access-67qtb\") pod \"whisker-589548c48f-dtbrr\" (UID: \"de63b504-5477-4705-aea8-65396b064e08\") " pod="calico-system/whisker-589548c48f-dtbrr" Jan 23 17:33:20.225910 kubelet[2951]: I0123 17:33:20.225875 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de63b504-5477-4705-aea8-65396b064e08-whisker-backend-key-pair\") pod \"whisker-589548c48f-dtbrr\" (UID: \"de63b504-5477-4705-aea8-65396b064e08\") " pod="calico-system/whisker-589548c48f-dtbrr" Jan 23 17:33:20.440072 systemd[1]: var-lib-kubelet-pods-e5dae696\x2d8fe0\x2d4228\x2db217\x2d55297601b045-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvjstv.mount: Deactivated successfully. Jan 23 17:33:20.440162 systemd[1]: var-lib-kubelet-pods-e5dae696\x2d8fe0\x2d4228\x2db217\x2d55297601b045-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 17:33:20.475385 containerd[1668]: time="2026-01-23T17:33:20.475328176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589548c48f-dtbrr,Uid:de63b504-5477-4705-aea8-65396b064e08,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:20.608234 systemd-networkd[1580]: cali9615b3a57c6: Link UP Jan 23 17:33:20.608591 systemd-networkd[1580]: cali9615b3a57c6: Gained carrier Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.500 [INFO][4123] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.521 [INFO][4123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0 whisker-589548c48f- calico-system de63b504-5477-4705-aea8-65396b064e08 887 0 2026-01-23 17:33:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:589548c48f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-1-0-4-2c8b61c80e whisker-589548c48f-dtbrr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9615b3a57c6 [] [] }} ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Namespace="calico-system" Pod="whisker-589548c48f-dtbrr" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.521 [INFO][4123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Namespace="calico-system" Pod="whisker-589548c48f-dtbrr" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.567 [INFO][4137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" HandleID="k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.567 [INFO][4137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" HandleID="k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-1-0-4-2c8b61c80e", "pod":"whisker-589548c48f-dtbrr", "timestamp":"2026-01-23 17:33:20.567298627 +0000 UTC"}, Hostname:"ci-4547-1-0-4-2c8b61c80e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.567 [INFO][4137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.567 [INFO][4137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.567 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-4-2c8b61c80e' Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.577 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.582 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.585 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.587 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.589 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.589 [INFO][4137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.591 [INFO][4137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.595 [INFO][4137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.599 [INFO][4137] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.123.193/26] block=192.168.123.192/26 handle="k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.599 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.193/26] handle="k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.599 [INFO][4137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:33:20.624619 containerd[1668]: 2026-01-23 17:33:20.599 [INFO][4137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.193/26] IPv6=[] ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" HandleID="k8s-pod-network.9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" Jan 23 17:33:20.625204 containerd[1668]: 2026-01-23 17:33:20.602 [INFO][4123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Namespace="calico-system" Pod="whisker-589548c48f-dtbrr" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0", GenerateName:"whisker-589548c48f-", Namespace:"calico-system", SelfLink:"", UID:"de63b504-5477-4705-aea8-65396b064e08", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 33, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"589548c48f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"", Pod:"whisker-589548c48f-dtbrr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9615b3a57c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:20.625204 containerd[1668]: 2026-01-23 17:33:20.602 [INFO][4123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.193/32] ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Namespace="calico-system" Pod="whisker-589548c48f-dtbrr" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" Jan 23 17:33:20.625204 containerd[1668]: 2026-01-23 17:33:20.602 [INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9615b3a57c6 ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Namespace="calico-system" Pod="whisker-589548c48f-dtbrr" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" Jan 23 17:33:20.625204 containerd[1668]: 2026-01-23 17:33:20.609 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Namespace="calico-system" Pod="whisker-589548c48f-dtbrr" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" Jan 23 17:33:20.625204 containerd[1668]: 2026-01-23 17:33:20.610 [INFO][4123] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Namespace="calico-system" Pod="whisker-589548c48f-dtbrr" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0", GenerateName:"whisker-589548c48f-", Namespace:"calico-system", SelfLink:"", UID:"de63b504-5477-4705-aea8-65396b064e08", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 33, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"589548c48f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e", Pod:"whisker-589548c48f-dtbrr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9615b3a57c6", MAC:"2a:f3:cc:3a:73:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:20.625204 containerd[1668]: 2026-01-23 17:33:20.620 [INFO][4123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" Namespace="calico-system" Pod="whisker-589548c48f-dtbrr" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-whisker--589548c48f--dtbrr-eth0" Jan 23 17:33:20.644446 containerd[1668]: time="2026-01-23T17:33:20.644389525Z" level=info msg="connecting to shim 9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e" address="unix:///run/containerd/s/099dfa97263dddec59a6582dc5152b9b4736ccf991efcb0c18138f1be984d6e1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:20.673230 systemd[1]: Started cri-containerd-9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e.scope - libcontainer container 9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e. Jan 23 17:33:20.683000 audit: BPF prog-id=175 op=LOAD Jan 23 17:33:20.683000 audit: BPF prog-id=176 op=LOAD Jan 23 17:33:20.683000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4163 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:20.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656131353863373834336234356363353666366335626636623634 Jan 23 17:33:20.683000 audit: BPF prog-id=176 op=UNLOAD Jan 23 17:33:20.683000 audit[4174]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4163 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:20.683000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656131353863373834336234356363353666366335626636623634 Jan 23 17:33:20.683000 audit: BPF prog-id=177 op=LOAD Jan 23 17:33:20.683000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4163 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:20.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656131353863373834336234356363353666366335626636623634 Jan 23 17:33:20.683000 audit: BPF prog-id=178 op=LOAD Jan 23 17:33:20.683000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4163 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:20.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656131353863373834336234356363353666366335626636623634 Jan 23 17:33:20.683000 audit: BPF prog-id=178 op=UNLOAD Jan 23 17:33:20.683000 audit[4174]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4163 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:33:20.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656131353863373834336234356363353666366335626636623634 Jan 23 17:33:20.683000 audit: BPF prog-id=177 op=UNLOAD Jan 23 17:33:20.683000 audit[4174]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4163 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:20.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656131353863373834336234356363353666366335626636623634 Jan 23 17:33:20.683000 audit: BPF prog-id=179 op=LOAD Jan 23 17:33:20.683000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4163 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:20.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656131353863373834336234356363353666366335626636623634 Jan 23 17:33:20.704645 containerd[1668]: time="2026-01-23T17:33:20.704532420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589548c48f-dtbrr,Uid:de63b504-5477-4705-aea8-65396b064e08,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"9cea158c7843b45cc56f6c5bf6b64d0ab3d07f6bf5fa05f650d3fedeb92f410e\"" Jan 23 17:33:20.706268 containerd[1668]: time="2026-01-23T17:33:20.706236269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:33:20.951525 kubelet[2951]: I0123 17:33:20.951487 2951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5dae696-8fe0-4228-b217-55297601b045" path="/var/lib/kubelet/pods/e5dae696-8fe0-4228-b217-55297601b045/volumes" Jan 23 17:33:21.041253 containerd[1668]: time="2026-01-23T17:33:21.041041711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:21.042532 containerd[1668]: time="2026-01-23T17:33:21.042466438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:33:21.042611 containerd[1668]: time="2026-01-23T17:33:21.042505318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:21.043062 kubelet[2951]: E0123 17:33:21.042813 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:33:21.043062 kubelet[2951]: E0123 17:33:21.042864 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:33:21.043847 kubelet[2951]: E0123 17:33:21.043008 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:acf1b8a947d84100b9ec5aa90fd31b84,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67qtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589548c48f-dtbrr_calico-system(de63b504-5477-4705-aea8-65396b064e08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:21.044930 containerd[1668]: time="2026-01-23T17:33:21.044884050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:33:21.374654 containerd[1668]: 
time="2026-01-23T17:33:21.374534187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:21.375867 containerd[1668]: time="2026-01-23T17:33:21.375819073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:33:21.375982 containerd[1668]: time="2026-01-23T17:33:21.375855674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:21.376157 kubelet[2951]: E0123 17:33:21.376117 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:33:21.376218 kubelet[2951]: E0123 17:33:21.376168 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:33:21.376356 kubelet[2951]: E0123 17:33:21.376279 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67qtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589548c48f-dtbrr_calico-system(de63b504-5477-4705-aea8-65396b064e08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:21.377877 kubelet[2951]: E0123 17:33:21.377462 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:33:21.957960 systemd-networkd[1580]: cali9615b3a57c6: Gained IPv6LL Jan 23 17:33:22.081101 kubelet[2951]: E0123 17:33:22.081056 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:33:22.100000 audit[4331]: NETFILTER_CFG table=filter:117 
family=2 entries=22 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:22.100000 audit[4331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffef7316b0 a2=0 a3=1 items=0 ppid=3062 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:22.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:22.109000 audit[4331]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:22.109000 audit[4331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffef7316b0 a2=0 a3=1 items=0 ppid=3062 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:22.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:23.023773 kubelet[2951]: I0123 17:33:23.023635 2951 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:33:23.046000 audit[4357]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:23.046000 audit[4357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe6c62670 a2=0 a3=1 items=0 ppid=3062 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.046000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:23.052000 audit[4357]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:23.052000 audit[4357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe6c62670 a2=0 a3=1 items=0 ppid=3062 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:23.487000 audit: BPF prog-id=180 op=LOAD Jan 23 17:33:23.487000 audit[4422]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf65a928 a2=98 a3=ffffcf65a918 items=0 ppid=4374 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.487000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:33:23.487000 audit: BPF prog-id=180 op=UNLOAD Jan 23 17:33:23.487000 audit[4422]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcf65a8f8 a3=0 items=0 ppid=4374 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.487000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:33:23.487000 audit: BPF prog-id=181 op=LOAD Jan 23 17:33:23.487000 audit[4422]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf65a7d8 a2=74 a3=95 items=0 ppid=4374 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.487000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:33:23.487000 audit: BPF prog-id=181 op=UNLOAD Jan 23 17:33:23.487000 audit[4422]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4374 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.487000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:33:23.487000 audit: BPF prog-id=182 op=LOAD Jan 23 17:33:23.487000 audit[4422]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf65a808 a2=40 a3=ffffcf65a838 items=0 ppid=4374 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.487000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:33:23.487000 audit: BPF prog-id=182 op=UNLOAD Jan 23 17:33:23.487000 audit[4422]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffcf65a838 items=0 ppid=4374 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.487000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:33:23.489000 audit: BPF prog-id=183 op=LOAD Jan 23 17:33:23.489000 audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffbc66d18 a2=98 a3=fffffbc66d08 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.489000 audit: BPF prog-id=183 op=UNLOAD Jan 23 17:33:23.489000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffbc66ce8 a3=0 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.489000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.489000 audit: BPF prog-id=184 op=LOAD Jan 23 17:33:23.489000 audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffbc669a8 a2=74 a3=95 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.489000 audit: BPF prog-id=184 op=UNLOAD Jan 23 17:33:23.489000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.489000 audit: BPF prog-id=185 op=LOAD Jan 23 17:33:23.489000 audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffbc66a08 a2=94 a3=2 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.489000 audit: BPF prog-id=185 op=UNLOAD Jan 23 17:33:23.489000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.589000 
audit: BPF prog-id=186 op=LOAD Jan 23 17:33:23.589000 audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffbc669c8 a2=40 a3=fffffbc669f8 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.589000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.589000 audit: BPF prog-id=186 op=UNLOAD Jan 23 17:33:23.589000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffbc669f8 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.589000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.599000 audit: BPF prog-id=187 op=LOAD Jan 23 17:33:23.599000 audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffbc669d8 a2=94 a3=4 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.599000 audit: BPF prog-id=187 op=UNLOAD Jan 23 17:33:23.599000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.599000 audit: BPF prog-id=188 op=LOAD Jan 23 17:33:23.599000 
audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffbc66818 a2=94 a3=5 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.599000 audit: BPF prog-id=188 op=UNLOAD Jan 23 17:33:23.599000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.599000 audit: BPF prog-id=189 op=LOAD Jan 23 17:33:23.599000 audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffbc66a48 a2=94 a3=6 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.599000 audit: BPF prog-id=189 op=UNLOAD Jan 23 17:33:23.599000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.600000 audit: BPF prog-id=190 op=LOAD Jan 23 17:33:23.600000 audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 
a1=fffffbc66218 a2=94 a3=83 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.600000 audit: BPF prog-id=191 op=LOAD Jan 23 17:33:23.600000 audit[4423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffbc65fd8 a2=94 a3=2 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.600000 audit: BPF prog-id=191 op=UNLOAD Jan 23 17:33:23.600000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.600000 audit: BPF prog-id=190 op=UNLOAD Jan 23 17:33:23.600000 audit[4423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=427c620 a3=426fb00 items=0 ppid=4374 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:33:23.609000 audit: BPF prog-id=192 op=LOAD Jan 23 17:33:23.609000 audit[4426]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6223db8 a2=98 a3=fffff6223da8 items=0 ppid=4374 pid=4426 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.609000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:33:23.609000 audit: BPF prog-id=192 op=UNLOAD Jan 23 17:33:23.609000 audit[4426]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff6223d88 a3=0 items=0 ppid=4374 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.609000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:33:23.609000 audit: BPF prog-id=193 op=LOAD Jan 23 17:33:23.609000 audit[4426]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6223c68 a2=74 a3=95 items=0 ppid=4374 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.609000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:33:23.609000 audit: BPF prog-id=193 op=UNLOAD Jan 23 17:33:23.609000 audit[4426]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4374 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.609000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:33:23.609000 audit: BPF prog-id=194 op=LOAD Jan 23 17:33:23.609000 audit[4426]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6223c98 a2=40 a3=fffff6223cc8 items=0 ppid=4374 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.609000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:33:23.609000 audit: BPF prog-id=194 op=UNLOAD Jan 23 17:33:23.609000 audit[4426]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff6223cc8 items=0 ppid=4374 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.609000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:33:23.664538 
systemd-networkd[1580]: vxlan.calico: Link UP Jan 23 17:33:23.664549 systemd-networkd[1580]: vxlan.calico: Gained carrier Jan 23 17:33:23.691000 audit: BPF prog-id=195 op=LOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea73c888 a2=98 a3=ffffea73c878 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=195 op=UNLOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffea73c858 a3=0 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=196 op=LOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea73c568 a2=74 a3=95 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=196 op=UNLOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=197 op=LOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea73c5c8 a2=94 a3=2 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=197 op=UNLOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=198 op=LOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea73c448 a2=40 a3=ffffea73c478 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=198 op=UNLOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffea73c478 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=199 op=LOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea73c598 a2=94 a3=b7 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.691000 audit: BPF prog-id=199 op=UNLOAD Jan 23 17:33:23.691000 audit[4454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.691000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.696000 audit: BPF prog-id=200 op=LOAD Jan 23 17:33:23.696000 audit[4454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea73bc48 a2=94 a3=2 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.696000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.696000 audit: BPF prog-id=200 op=UNLOAD Jan 23 17:33:23.696000 audit[4454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.696000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.696000 audit: BPF prog-id=201 op=LOAD Jan 23 17:33:23.696000 audit[4454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea73bdd8 a2=94 a3=30 items=0 ppid=4374 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.696000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:33:23.699000 audit: BPF prog-id=202 op=LOAD Jan 23 17:33:23.699000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc81db2c8 a2=98 a3=ffffc81db2b8 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.699000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.699000 audit: BPF prog-id=202 op=UNLOAD Jan 23 17:33:23.699000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc81db298 a3=0 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.699000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.700000 audit: BPF prog-id=203 op=LOAD Jan 23 17:33:23.700000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc81daf58 a2=74 a3=95 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.700000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.700000 audit: BPF prog-id=203 op=UNLOAD Jan 23 17:33:23.700000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.700000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.700000 audit: BPF prog-id=204 op=LOAD Jan 23 17:33:23.700000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc81dafb8 a2=94 a3=2 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.700000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.700000 audit: BPF prog-id=204 op=UNLOAD Jan 23 17:33:23.700000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.700000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.802000 audit: BPF prog-id=205 op=LOAD Jan 23 17:33:23.802000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc81daf78 a2=40 a3=ffffc81dafa8 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.803000 audit: BPF prog-id=205 op=UNLOAD Jan 23 17:33:23.803000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc81dafa8 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.803000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.812000 audit: BPF prog-id=206 op=LOAD Jan 23 17:33:23.812000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc81daf88 a2=94 a3=4 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.812000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.813000 audit: BPF prog-id=206 op=UNLOAD Jan 23 17:33:23.813000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.813000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.813000 audit: BPF prog-id=207 op=LOAD Jan 23 17:33:23.813000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc81dadc8 a2=94 a3=5 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.813000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.813000 audit: BPF prog-id=207 op=UNLOAD Jan 23 17:33:23.813000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.813000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.813000 audit: BPF prog-id=208 op=LOAD Jan 23 17:33:23.813000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc81daff8 a2=94 a3=6 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.813000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.813000 audit: BPF prog-id=208 op=UNLOAD Jan 23 17:33:23.813000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.813000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.814000 audit: BPF prog-id=209 op=LOAD Jan 23 17:33:23.814000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc81da7c8 a2=94 a3=83 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.814000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.814000 audit: BPF prog-id=210 op=LOAD Jan 23 17:33:23.814000 audit[4463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc81da588 a2=94 a3=2 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.814000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.815000 audit: BPF prog-id=210 op=UNLOAD Jan 23 17:33:23.815000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.815000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.815000 audit: BPF prog-id=209 op=UNLOAD Jan 23 17:33:23.815000 audit[4463]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=18de5620 a3=18dd8b00 items=0 ppid=4374 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.815000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:33:23.831000 audit: BPF prog-id=201 op=UNLOAD Jan 23 17:33:23.831000 audit[4374]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40009fa980 a2=0 a3=0 items=0 ppid=4222 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.831000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 17:33:23.876000 audit[4485]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4485 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:23.876000 audit[4485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff69912e0 a2=0 a3=ffff8c7b5fa8 items=0 ppid=4374 pid=4485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.876000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:23.877000 audit[4486]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4486 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:23.877000 audit[4486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffcc5b22b0 a2=0 a3=ffffa6805fa8 items=0 ppid=4374 pid=4486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.877000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:23.884000 audit[4484]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4484 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:23.884000 audit[4484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe6900540 a2=0 a3=ffff84e87fa8 items=0 ppid=4374 pid=4484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.884000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:23.888000 audit[4489]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4489 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:23.888000 audit[4489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe1db3320 a2=0 a3=ffff9f61bfa8 items=0 ppid=4374 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:23.888000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:25.605991 systemd-networkd[1580]: vxlan.calico: Gained IPv6LL Jan 23 17:33:25.948940 containerd[1668]: time="2026-01-23T17:33:25.948824386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686778f54-qdppv,Uid:287c72fe-ff92-46ec-9e19-273465100dda,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:33:26.050547 systemd-networkd[1580]: cali262c7edd5dc: Link UP Jan 23 17:33:26.051024 systemd-networkd[1580]: cali262c7edd5dc: Gained carrier Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:25.986 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0 calico-apiserver-6686778f54- calico-apiserver 287c72fe-ff92-46ec-9e19-273465100dda 821 0 2026-01-23 17:32:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6686778f54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-1-0-4-2c8b61c80e calico-apiserver-6686778f54-qdppv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali262c7edd5dc [] [] }} ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-qdppv" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:25.986 [INFO][4501] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-qdppv" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.008 [INFO][4515] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" HandleID="k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.008 [INFO][4515] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" HandleID="k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400041e010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-1-0-4-2c8b61c80e", "pod":"calico-apiserver-6686778f54-qdppv", "timestamp":"2026-01-23 17:33:26.008277678 +0000 UTC"}, Hostname:"ci-4547-1-0-4-2c8b61c80e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.008 [INFO][4515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.008 [INFO][4515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.008 [INFO][4515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-4-2c8b61c80e' Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.018 [INFO][4515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.025 [INFO][4515] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.030 [INFO][4515] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.032 [INFO][4515] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.035 [INFO][4515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.035 [INFO][4515] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.036 [INFO][4515] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494 Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.040 [INFO][4515] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.046 [INFO][4515] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.123.194/26] block=192.168.123.192/26 handle="k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.046 [INFO][4515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.194/26] handle="k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.046 [INFO][4515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:33:26.062329 containerd[1668]: 2026-01-23 17:33:26.046 [INFO][4515] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.194/26] IPv6=[] ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" HandleID="k8s-pod-network.60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" Jan 23 17:33:26.062875 containerd[1668]: 2026-01-23 17:33:26.048 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-qdppv" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0", GenerateName:"calico-apiserver-6686778f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"287c72fe-ff92-46ec-9e19-273465100dda", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6686778f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"", Pod:"calico-apiserver-6686778f54-qdppv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali262c7edd5dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:26.062875 containerd[1668]: 2026-01-23 17:33:26.048 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.194/32] ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-qdppv" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" Jan 23 17:33:26.062875 containerd[1668]: 2026-01-23 17:33:26.048 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali262c7edd5dc ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-qdppv" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" Jan 23 17:33:26.062875 containerd[1668]: 2026-01-23 17:33:26.050 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Namespace="calico-apiserver" 
Pod="calico-apiserver-6686778f54-qdppv" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" Jan 23 17:33:26.062875 containerd[1668]: 2026-01-23 17:33:26.051 [INFO][4501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-qdppv" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0", GenerateName:"calico-apiserver-6686778f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"287c72fe-ff92-46ec-9e19-273465100dda", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6686778f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494", Pod:"calico-apiserver-6686778f54-qdppv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali262c7edd5dc", MAC:"8a:bc:2a:c5:bd:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:26.062875 containerd[1668]: 2026-01-23 17:33:26.060 [INFO][4501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-qdppv" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--qdppv-eth0" Jan 23 17:33:26.071000 audit[4532]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:26.075919 kernel: kauditd_printk_skb: 237 callbacks suppressed Jan 23 17:33:26.076065 kernel: audit: type=1325 audit(1769189606.071:655): table=filter:125 family=2 entries=50 op=nft_register_chain pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:26.076099 kernel: audit: type=1300 audit(1769189606.071:655): arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffd34300b0 a2=0 a3=ffffa3173fa8 items=0 ppid=4374 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.071000 audit[4532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffd34300b0 a2=0 a3=ffffa3173fa8 items=0 ppid=4374 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.071000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:26.083604 kernel: audit: 
type=1327 audit(1769189606.071:655): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:26.084430 containerd[1668]: time="2026-01-23T17:33:26.083965209Z" level=info msg="connecting to shim 60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494" address="unix:///run/containerd/s/e331ae86112cd7411ecaeedb137fe596710dc940d3301cbf78650bcca521ed4b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:26.112177 systemd[1]: Started cri-containerd-60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494.scope - libcontainer container 60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494. Jan 23 17:33:26.122000 audit: BPF prog-id=211 op=LOAD Jan 23 17:33:26.123000 audit: BPF prog-id=212 op=LOAD Jan 23 17:33:26.125481 kernel: audit: type=1334 audit(1769189606.122:656): prog-id=211 op=LOAD Jan 23 17:33:26.125540 kernel: audit: type=1334 audit(1769189606.123:657): prog-id=212 op=LOAD Jan 23 17:33:26.123000 audit[4553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000170180 a2=98 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.129030 kernel: audit: type=1300 audit(1769189606.123:657): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000170180 a2=98 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.129235 kernel: audit: type=1327 audit(1769189606.123:657): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.124000 audit: BPF prog-id=212 op=UNLOAD Jan 23 17:33:26.133457 kernel: audit: type=1334 audit(1769189606.124:658): prog-id=212 op=UNLOAD Jan 23 17:33:26.133539 kernel: audit: type=1300 audit(1769189606.124:658): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.124000 audit[4553]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.139389 kernel: audit: type=1327 audit(1769189606.124:658): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.124000 audit: BPF prog-id=213 op=LOAD Jan 23 17:33:26.124000 audit[4553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001703e8 a2=98 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.124000 audit: BPF prog-id=214 op=LOAD Jan 23 17:33:26.124000 audit[4553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000170168 a2=98 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.128000 audit: BPF prog-id=214 op=UNLOAD Jan 23 17:33:26.128000 audit[4553]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:33:26.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.128000 audit: BPF prog-id=213 op=UNLOAD Jan 23 17:33:26.128000 audit[4553]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.128000 audit: BPF prog-id=215 op=LOAD Jan 23 17:33:26.128000 audit[4553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000170648 a2=98 a3=0 items=0 ppid=4541 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:26.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663864653436343934386362326538623361303961386164303361 Jan 23 17:33:26.158463 containerd[1668]: time="2026-01-23T17:33:26.158428255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686778f54-qdppv,Uid:287c72fe-ff92-46ec-9e19-273465100dda,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"60f8de464948cb2e8b3a09a8ad03a7bfb4581aa34b10ee670e437e12b3eae494\"" Jan 23 17:33:26.159744 containerd[1668]: time="2026-01-23T17:33:26.159718621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:33:26.491456 containerd[1668]: time="2026-01-23T17:33:26.491339608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:26.492928 containerd[1668]: time="2026-01-23T17:33:26.492856975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:33:26.492993 containerd[1668]: time="2026-01-23T17:33:26.492938376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:26.493363 kubelet[2951]: E0123 17:33:26.493121 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:26.493363 kubelet[2951]: E0123 17:33:26.493170 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:26.493363 kubelet[2951]: E0123 17:33:26.493306 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btn72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6686778f54-qdppv_calico-apiserver(287c72fe-ff92-46ec-9e19-273465100dda): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:26.494519 kubelet[2951]: E0123 17:33:26.494485 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:33:27.090018 kubelet[2951]: E0123 17:33:27.089961 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:33:27.110000 audit[4580]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:27.110000 audit[4580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdd6477d0 a2=0 a3=1 items=0 ppid=3062 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:27.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:27.118000 audit[4580]: NETFILTER_CFG 
table=nat:127 family=2 entries=14 op=nft_register_rule pid=4580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:27.118000 audit[4580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdd6477d0 a2=0 a3=1 items=0 ppid=3062 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:27.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:27.397960 systemd-networkd[1580]: cali262c7edd5dc: Gained IPv6LL Jan 23 17:33:27.948863 containerd[1668]: time="2026-01-23T17:33:27.948714997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686778f54-7m9rh,Uid:0cda543d-2a18-4d2d-aa42-b11c9c8288f8,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:33:27.949198 containerd[1668]: time="2026-01-23T17:33:27.948990158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hp496,Uid:49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:27.949198 containerd[1668]: time="2026-01-23T17:33:27.949052039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fwt87,Uid:0dc12ef4-de69-4bd9-8385-4e2f0df15cde,Namespace:kube-system,Attempt:0,}" Jan 23 17:33:27.949198 containerd[1668]: time="2026-01-23T17:33:27.948716277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qcpzs,Uid:96e04b17-1bf7-4d71-9569-39faa03d6952,Namespace:kube-system,Attempt:0,}" Jan 23 17:33:27.949198 containerd[1668]: time="2026-01-23T17:33:27.948721237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfmmc,Uid:decbe37e-6413-4ebb-af5d-fd959613c007,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:28.091716 kubelet[2951]: E0123 17:33:28.091667 2951 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:33:28.109153 systemd-networkd[1580]: cali3b5d6efc025: Link UP Jan 23 17:33:28.110098 systemd-networkd[1580]: cali3b5d6efc025: Gained carrier Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.020 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0 coredns-674b8bbfcf- kube-system 0dc12ef4-de69-4bd9-8385-4e2f0df15cde 822 0 2026-01-23 17:32:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-1-0-4-2c8b61c80e coredns-674b8bbfcf-fwt87 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b5d6efc025 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwt87" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.020 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwt87" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" Jan 23 17:33:28.126608 
containerd[1668]: 2026-01-23 17:33:28.051 [INFO][4661] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" HandleID="k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.051 [INFO][4661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" HandleID="k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400021ea80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-1-0-4-2c8b61c80e", "pod":"coredns-674b8bbfcf-fwt87", "timestamp":"2026-01-23 17:33:28.05134018 +0000 UTC"}, Hostname:"ci-4547-1-0-4-2c8b61c80e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.051 [INFO][4661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.051 [INFO][4661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.051 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-4-2c8b61c80e' Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.063 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.068 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.075 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.077 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.079 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.080 [INFO][4661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.084 [INFO][4661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.090 [INFO][4661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.098 [INFO][4661] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.123.195/26] block=192.168.123.192/26 handle="k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.099 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.195/26] handle="k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.100 [INFO][4661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:33:28.126608 containerd[1668]: 2026-01-23 17:33:28.100 [INFO][4661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.195/26] IPv6=[] ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" HandleID="k8s-pod-network.7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" Jan 23 17:33:28.127394 containerd[1668]: 2026-01-23 17:33:28.104 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwt87" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0dc12ef4-de69-4bd9-8385-4e2f0df15cde", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"", Pod:"coredns-674b8bbfcf-fwt87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b5d6efc025", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.127394 containerd[1668]: 2026-01-23 17:33:28.104 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.195/32] ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwt87" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" Jan 23 17:33:28.127394 containerd[1668]: 2026-01-23 17:33:28.105 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b5d6efc025 ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwt87" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" Jan 23 17:33:28.127394 containerd[1668]: 2026-01-23 17:33:28.111 [INFO][4599] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwt87" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" Jan 23 17:33:28.127394 containerd[1668]: 2026-01-23 17:33:28.112 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwt87" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0dc12ef4-de69-4bd9-8385-4e2f0df15cde", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f", Pod:"coredns-674b8bbfcf-fwt87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b5d6efc025", 
MAC:"4a:59:f6:70:7a:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.127555 containerd[1668]: 2026-01-23 17:33:28.125 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwt87" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--fwt87-eth0" Jan 23 17:33:28.138000 audit[4707]: NETFILTER_CFG table=filter:128 family=2 entries=46 op=nft_register_chain pid=4707 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:28.138000 audit[4707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffe7990550 a2=0 a3=ffff8140efa8 items=0 ppid=4374 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.138000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:28.149710 containerd[1668]: time="2026-01-23T17:33:28.149662943Z" level=info msg="connecting to shim 7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f" address="unix:///run/containerd/s/5fc592d07a4242abd37584bd8eb7f9cf494b6aa4fa95e9a439ecdbb12de7a0f6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:28.171385 
systemd[1]: Started cri-containerd-7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f.scope - libcontainer container 7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f. Jan 23 17:33:28.184000 audit: BPF prog-id=216 op=LOAD Jan 23 17:33:28.185000 audit: BPF prog-id=217 op=LOAD Jan 23 17:33:28.185000 audit[4727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4716 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353438646462666232363861316137326465393465373564656438 Jan 23 17:33:28.185000 audit: BPF prog-id=217 op=UNLOAD Jan 23 17:33:28.185000 audit[4727]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353438646462666232363861316137326465393465373564656438 Jan 23 17:33:28.186000 audit: BPF prog-id=218 op=LOAD Jan 23 17:33:28.186000 audit[4727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4716 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:33:28.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353438646462666232363861316137326465393465373564656438 Jan 23 17:33:28.186000 audit: BPF prog-id=219 op=LOAD Jan 23 17:33:28.186000 audit[4727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4716 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353438646462666232363861316137326465393465373564656438 Jan 23 17:33:28.186000 audit: BPF prog-id=219 op=UNLOAD Jan 23 17:33:28.186000 audit[4727]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353438646462666232363861316137326465393465373564656438 Jan 23 17:33:28.186000 audit: BPF prog-id=218 op=UNLOAD Jan 23 17:33:28.186000 audit[4727]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353438646462666232363861316137326465393465373564656438 Jan 23 17:33:28.186000 audit: BPF prog-id=220 op=LOAD Jan 23 17:33:28.186000 audit[4727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4716 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353438646462666232363861316137326465393465373564656438 Jan 23 17:33:28.202019 systemd-networkd[1580]: cali752f14fe5f7: Link UP Jan 23 17:33:28.202187 systemd-networkd[1580]: cali752f14fe5f7: Gained carrier Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.017 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0 goldmane-666569f655- calico-system 49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded 823 0 2026-01-23 17:32:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-1-0-4-2c8b61c80e goldmane-666569f655-hp496 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali752f14fe5f7 [] [] }} ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" 
Namespace="calico-system" Pod="goldmane-666569f655-hp496" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.017 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Namespace="calico-system" Pod="goldmane-666569f655-hp496" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.062 [INFO][4653] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" HandleID="k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.063 [INFO][4653] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" HandleID="k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012eda0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-1-0-4-2c8b61c80e", "pod":"goldmane-666569f655-hp496", "timestamp":"2026-01-23 17:33:28.062938197 +0000 UTC"}, Hostname:"ci-4547-1-0-4-2c8b61c80e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.063 [INFO][4653] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.101 [INFO][4653] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.101 [INFO][4653] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-4-2c8b61c80e' Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.163 [INFO][4653] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.170 [INFO][4653] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.176 [INFO][4653] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.179 [INFO][4653] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.182 [INFO][4653] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.182 [INFO][4653] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.184 [INFO][4653] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.188 [INFO][4653] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.192/26 
handle="k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.195 [INFO][4653] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.196/26] block=192.168.123.192/26 handle="k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.195 [INFO][4653] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.196/26] handle="k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.195 [INFO][4653] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:33:28.218593 containerd[1668]: 2026-01-23 17:33:28.195 [INFO][4653] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.196/26] IPv6=[] ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" HandleID="k8s-pod-network.802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" Jan 23 17:33:28.219697 containerd[1668]: 2026-01-23 17:33:28.200 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Namespace="calico-system" Pod="goldmane-666569f655-hp496" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 
23, 17, 32, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"", Pod:"goldmane-666569f655-hp496", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali752f14fe5f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.219697 containerd[1668]: 2026-01-23 17:33:28.200 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.196/32] ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Namespace="calico-system" Pod="goldmane-666569f655-hp496" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" Jan 23 17:33:28.219697 containerd[1668]: 2026-01-23 17:33:28.200 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali752f14fe5f7 ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Namespace="calico-system" Pod="goldmane-666569f655-hp496" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" Jan 23 17:33:28.219697 containerd[1668]: 2026-01-23 17:33:28.202 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" 
Namespace="calico-system" Pod="goldmane-666569f655-hp496" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" Jan 23 17:33:28.219697 containerd[1668]: 2026-01-23 17:33:28.202 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Namespace="calico-system" Pod="goldmane-666569f655-hp496" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a", Pod:"goldmane-666569f655-hp496", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali752f14fe5f7", MAC:"fe:c7:00:fd:ed:2c", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.219697 containerd[1668]: 2026-01-23 17:33:28.215 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" Namespace="calico-system" Pod="goldmane-666569f655-hp496" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-goldmane--666569f655--hp496-eth0" Jan 23 17:33:28.221427 containerd[1668]: time="2026-01-23T17:33:28.221387135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fwt87,Uid:0dc12ef4-de69-4bd9-8385-4e2f0df15cde,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f\"" Jan 23 17:33:28.225971 containerd[1668]: time="2026-01-23T17:33:28.225744436Z" level=info msg="CreateContainer within sandbox \"7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:33:28.232000 audit[4761]: NETFILTER_CFG table=filter:129 family=2 entries=52 op=nft_register_chain pid=4761 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:28.232000 audit[4761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27556 a0=3 a1=fffffb19abe0 a2=0 a3=ffffae416fa8 items=0 ppid=4374 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.232000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:28.239733 containerd[1668]: time="2026-01-23T17:33:28.239521424Z" level=info msg="Container 27571ce0268173334c169d5b884bfd3f4e2ada7c4d093376b120ababded75d55: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:33:28.245089 
containerd[1668]: time="2026-01-23T17:33:28.245040611Z" level=info msg="connecting to shim 802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a" address="unix:///run/containerd/s/965b90b56f72125eb282af9df2f6b58e591f14e609046ad5a918b2f34e698041" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:28.247599 containerd[1668]: time="2026-01-23T17:33:28.247560743Z" level=info msg="CreateContainer within sandbox \"7f548ddbfb268a1a72de94e75ded885a58630eac197de436393895285bdaa92f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27571ce0268173334c169d5b884bfd3f4e2ada7c4d093376b120ababded75d55\"" Jan 23 17:33:28.248073 containerd[1668]: time="2026-01-23T17:33:28.248034025Z" level=info msg="StartContainer for \"27571ce0268173334c169d5b884bfd3f4e2ada7c4d093376b120ababded75d55\"" Jan 23 17:33:28.250393 containerd[1668]: time="2026-01-23T17:33:28.250360437Z" level=info msg="connecting to shim 27571ce0268173334c169d5b884bfd3f4e2ada7c4d093376b120ababded75d55" address="unix:///run/containerd/s/5fc592d07a4242abd37584bd8eb7f9cf494b6aa4fa95e9a439ecdbb12de7a0f6" protocol=ttrpc version=3 Jan 23 17:33:28.270990 systemd[1]: Started cri-containerd-27571ce0268173334c169d5b884bfd3f4e2ada7c4d093376b120ababded75d55.scope - libcontainer container 27571ce0268173334c169d5b884bfd3f4e2ada7c4d093376b120ababded75d55. Jan 23 17:33:28.272120 systemd[1]: Started cri-containerd-802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a.scope - libcontainer container 802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a. 
Jan 23 17:33:28.287000 audit: BPF prog-id=221 op=LOAD Jan 23 17:33:28.287000 audit: BPF prog-id=222 op=LOAD Jan 23 17:33:28.287000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4716 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237353731636530323638313733333334633136396435623838346266 Jan 23 17:33:28.288000 audit: BPF prog-id=222 op=UNLOAD Jan 23 17:33:28.288000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237353731636530323638313733333334633136396435623838346266 Jan 23 17:33:28.288000 audit: BPF prog-id=223 op=LOAD Jan 23 17:33:28.288000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4716 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.288000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237353731636530323638313733333334633136396435623838346266 Jan 23 17:33:28.288000 audit: BPF prog-id=224 op=LOAD Jan 23 17:33:28.288000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4716 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237353731636530323638313733333334633136396435623838346266 Jan 23 17:33:28.288000 audit: BPF prog-id=224 op=UNLOAD Jan 23 17:33:28.288000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237353731636530323638313733333334633136396435623838346266 Jan 23 17:33:28.288000 audit: BPF prog-id=223 op=UNLOAD Jan 23 17:33:28.288000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:33:28.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237353731636530323638313733333334633136396435623838346266 Jan 23 17:33:28.288000 audit: BPF prog-id=225 op=LOAD Jan 23 17:33:28.288000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4716 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237353731636530323638313733333334633136396435623838346266 Jan 23 17:33:28.293000 audit: BPF prog-id=226 op=LOAD Jan 23 17:33:28.293000 audit: BPF prog-id=227 op=LOAD Jan 23 17:33:28.293000 audit[4783]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4772 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326634626338313637323261396433373431623038323338636632 Jan 23 17:33:28.294000 audit: BPF prog-id=227 op=UNLOAD Jan 23 17:33:28.294000 audit[4783]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4772 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326634626338313637323261396433373431623038323338636632 Jan 23 17:33:28.294000 audit: BPF prog-id=228 op=LOAD Jan 23 17:33:28.294000 audit[4783]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4772 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326634626338313637323261396433373431623038323338636632 Jan 23 17:33:28.294000 audit: BPF prog-id=229 op=LOAD Jan 23 17:33:28.294000 audit[4783]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4772 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326634626338313637323261396433373431623038323338636632 Jan 23 17:33:28.294000 audit: BPF prog-id=229 op=UNLOAD Jan 23 17:33:28.294000 audit[4783]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4772 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326634626338313637323261396433373431623038323338636632 Jan 23 17:33:28.294000 audit: BPF prog-id=228 op=UNLOAD Jan 23 17:33:28.294000 audit[4783]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4772 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326634626338313637323261396433373431623038323338636632 Jan 23 17:33:28.294000 audit: BPF prog-id=230 op=LOAD Jan 23 17:33:28.294000 audit[4783]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4772 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326634626338313637323261396433373431623038323338636632 Jan 23 17:33:28.304823 systemd-networkd[1580]: cali35c0af9e92f: Link UP Jan 23 17:33:28.305027 systemd-networkd[1580]: cali35c0af9e92f: Gained carrier Jan 23 17:33:28.323651 containerd[1668]: 
2026-01-23 17:33:28.021 [INFO][4581] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0 calico-apiserver-6686778f54- calico-apiserver 0cda543d-2a18-4d2d-aa42-b11c9c8288f8 825 0 2026-01-23 17:32:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6686778f54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-1-0-4-2c8b61c80e calico-apiserver-6686778f54-7m9rh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35c0af9e92f [] [] }} ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-7m9rh" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.022 [INFO][4581] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-7m9rh" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.066 [INFO][4659] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" HandleID="k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.066 [INFO][4659] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" 
HandleID="k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001184a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-1-0-4-2c8b61c80e", "pod":"calico-apiserver-6686778f54-7m9rh", "timestamp":"2026-01-23 17:33:28.066335974 +0000 UTC"}, Hostname:"ci-4547-1-0-4-2c8b61c80e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.066 [INFO][4659] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.195 [INFO][4659] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.195 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-4-2c8b61c80e' Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.264 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.270 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.276 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.281 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.283 [INFO][4659] ipam/ipam.go 235: 
Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.283 [INFO][4659] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.285 [INFO][4659] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562 Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.290 [INFO][4659] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.298 [INFO][4659] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.197/26] block=192.168.123.192/26 handle="k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.298 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.197/26] handle="k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.298 [INFO][4659] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:33:28.323651 containerd[1668]: 2026-01-23 17:33:28.298 [INFO][4659] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.197/26] IPv6=[] ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" HandleID="k8s-pod-network.3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" Jan 23 17:33:28.325368 containerd[1668]: 2026-01-23 17:33:28.301 [INFO][4581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-7m9rh" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0", GenerateName:"calico-apiserver-6686778f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cda543d-2a18-4d2d-aa42-b11c9c8288f8", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6686778f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"", Pod:"calico-apiserver-6686778f54-7m9rh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.123.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35c0af9e92f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.325368 containerd[1668]: 2026-01-23 17:33:28.301 [INFO][4581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.197/32] ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-7m9rh" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" Jan 23 17:33:28.325368 containerd[1668]: 2026-01-23 17:33:28.301 [INFO][4581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35c0af9e92f ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-7m9rh" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" Jan 23 17:33:28.325368 containerd[1668]: 2026-01-23 17:33:28.307 [INFO][4581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-7m9rh" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" Jan 23 17:33:28.325368 containerd[1668]: 2026-01-23 17:33:28.308 [INFO][4581] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-7m9rh" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0", GenerateName:"calico-apiserver-6686778f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cda543d-2a18-4d2d-aa42-b11c9c8288f8", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6686778f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562", Pod:"calico-apiserver-6686778f54-7m9rh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35c0af9e92f", MAC:"ca:d5:5d:14:f1:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.325368 containerd[1668]: 2026-01-23 17:33:28.319 [INFO][4581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" Namespace="calico-apiserver" Pod="calico-apiserver-6686778f54-7m9rh" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--apiserver--6686778f54--7m9rh-eth0" Jan 23 17:33:28.325765 containerd[1668]: time="2026-01-23T17:33:28.325598726Z" 
level=info msg="StartContainer for \"27571ce0268173334c169d5b884bfd3f4e2ada7c4d093376b120ababded75d55\" returns successfully" Jan 23 17:33:28.343625 containerd[1668]: time="2026-01-23T17:33:28.343582214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hp496,Uid:49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded,Namespace:calico-system,Attempt:0,} returns sandbox id \"802f4bc816722a9d3741b08238cf26bc8067424e566bd76ee603f08d66171b1a\"" Jan 23 17:33:28.345000 audit[4846]: NETFILTER_CFG table=filter:130 family=2 entries=55 op=nft_register_chain pid=4846 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:28.346856 containerd[1668]: time="2026-01-23T17:33:28.346625429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:33:28.345000 audit[4846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28304 a0=3 a1=ffffe41d83f0 a2=0 a3=ffffba296fa8 items=0 ppid=4374 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.345000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:28.359150 containerd[1668]: time="2026-01-23T17:33:28.359041890Z" level=info msg="connecting to shim 3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562" address="unix:///run/containerd/s/a589f89b32bb7566b239dea125ddf5da96e4aa9d4cda86d27160b18c488dbc2d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:28.393103 systemd[1]: Started cri-containerd-3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562.scope - libcontainer container 3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562. 
Jan 23 17:33:28.408000 audit: BPF prog-id=231 op=LOAD Jan 23 17:33:28.409000 audit: BPF prog-id=232 op=LOAD Jan 23 17:33:28.409000 audit[4869]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4857 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313238343062343337343863363365626130636562343333383866 Jan 23 17:33:28.410000 audit: BPF prog-id=232 op=UNLOAD Jan 23 17:33:28.410000 audit[4869]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4857 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313238343062343337343863363365626130636562343333383866 Jan 23 17:33:28.411000 audit: BPF prog-id=233 op=LOAD Jan 23 17:33:28.411000 audit[4869]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4857 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.411000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313238343062343337343863363365626130636562343333383866 Jan 23 17:33:28.411000 audit: BPF prog-id=234 op=LOAD Jan 23 17:33:28.411000 audit[4869]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4857 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313238343062343337343863363365626130636562343333383866 Jan 23 17:33:28.411000 audit: BPF prog-id=234 op=UNLOAD Jan 23 17:33:28.411000 audit[4869]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4857 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313238343062343337343863363365626130636562343333383866 Jan 23 17:33:28.411000 audit: BPF prog-id=233 op=UNLOAD Jan 23 17:33:28.411000 audit[4869]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4857 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:33:28.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313238343062343337343863363365626130636562343333383866 Jan 23 17:33:28.411000 audit: BPF prog-id=235 op=LOAD Jan 23 17:33:28.411000 audit[4869]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4857 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313238343062343337343863363365626130636562343333383866 Jan 23 17:33:28.415502 systemd-networkd[1580]: califfa716522d1: Link UP Jan 23 17:33:28.417147 systemd-networkd[1580]: califfa716522d1: Gained carrier Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.065 [INFO][4629] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0 csi-node-driver- calico-system decbe37e-6413-4ebb-af5d-fd959613c007 720 0 2026-01-23 17:33:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-1-0-4-2c8b61c80e csi-node-driver-gfmmc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califfa716522d1 [] [] }} 
ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Namespace="calico-system" Pod="csi-node-driver-gfmmc" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.065 [INFO][4629] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Namespace="calico-system" Pod="csi-node-driver-gfmmc" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.089 [INFO][4680] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" HandleID="k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.089 [INFO][4680] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" HandleID="k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042a5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-1-0-4-2c8b61c80e", "pod":"csi-node-driver-gfmmc", "timestamp":"2026-01-23 17:33:28.089849489 +0000 UTC"}, Hostname:"ci-4547-1-0-4-2c8b61c80e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.090 [INFO][4680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.298 [INFO][4680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.298 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-4-2c8b61c80e' Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.366 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.375 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.383 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.385 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.389 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.389 [INFO][4680] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.391 [INFO][4680] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.400 [INFO][4680] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.192/26 
handle="k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.409 [INFO][4680] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.198/26] block=192.168.123.192/26 handle="k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.409 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.198/26] handle="k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.409 [INFO][4680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:33:28.432850 containerd[1668]: 2026-01-23 17:33:28.409 [INFO][4680] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.198/26] IPv6=[] ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" HandleID="k8s-pod-network.4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" Jan 23 17:33:28.433382 containerd[1668]: 2026-01-23 17:33:28.411 [INFO][4629] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Namespace="calico-system" Pod="csi-node-driver-gfmmc" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"decbe37e-6413-4ebb-af5d-fd959613c007", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 33, 0, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"", Pod:"csi-node-driver-gfmmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfa716522d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.433382 containerd[1668]: 2026-01-23 17:33:28.411 [INFO][4629] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.198/32] ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Namespace="calico-system" Pod="csi-node-driver-gfmmc" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" Jan 23 17:33:28.433382 containerd[1668]: 2026-01-23 17:33:28.411 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfa716522d1 ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Namespace="calico-system" Pod="csi-node-driver-gfmmc" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" Jan 23 17:33:28.433382 containerd[1668]: 2026-01-23 17:33:28.417 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Namespace="calico-system" Pod="csi-node-driver-gfmmc" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" Jan 23 17:33:28.433382 containerd[1668]: 2026-01-23 17:33:28.418 [INFO][4629] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Namespace="calico-system" Pod="csi-node-driver-gfmmc" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"decbe37e-6413-4ebb-af5d-fd959613c007", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f", Pod:"csi-node-driver-gfmmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"califfa716522d1", MAC:"f2:f5:26:d1:29:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.433382 containerd[1668]: 2026-01-23 17:33:28.429 [INFO][4629] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" Namespace="calico-system" Pod="csi-node-driver-gfmmc" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-csi--node--driver--gfmmc-eth0" Jan 23 17:33:28.449000 audit[4906]: NETFILTER_CFG table=filter:131 family=2 entries=48 op=nft_register_chain pid=4906 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:28.449000 audit[4906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=fffff35ffb70 a2=0 a3=ffff80ec5fa8 items=0 ppid=4374 pid=4906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.449000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:28.451917 containerd[1668]: time="2026-01-23T17:33:28.451876745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686778f54-7m9rh,Uid:0cda543d-2a18-4d2d-aa42-b11c9c8288f8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3c12840b43748c63eba0ceb43388f09ffa6deb13cab51f33eccf94b4be5a2562\"" Jan 23 17:33:28.461063 containerd[1668]: time="2026-01-23T17:33:28.460944590Z" level=info msg="connecting to shim 4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f" address="unix:///run/containerd/s/a252c38701254ef143a299604ecaee1fc663a470735a54cd43bcbadafe03acd4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:28.488977 systemd[1]: 
Started cri-containerd-4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f.scope - libcontainer container 4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f. Jan 23 17:33:28.500000 audit: BPF prog-id=236 op=LOAD Jan 23 17:33:28.501000 audit: BPF prog-id=237 op=LOAD Jan 23 17:33:28.501000 audit[4927]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633334633633646634646430343663333361306235373364636465 Jan 23 17:33:28.501000 audit: BPF prog-id=237 op=UNLOAD Jan 23 17:33:28.501000 audit[4927]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633334633633646634646430343663333361306235373364636465 Jan 23 17:33:28.501000 audit: BPF prog-id=238 op=LOAD Jan 23 17:33:28.501000 audit[4927]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.501000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633334633633646634646430343663333361306235373364636465 Jan 23 17:33:28.501000 audit: BPF prog-id=239 op=LOAD Jan 23 17:33:28.501000 audit[4927]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633334633633646634646430343663333361306235373364636465 Jan 23 17:33:28.501000 audit: BPF prog-id=239 op=UNLOAD Jan 23 17:33:28.501000 audit[4927]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633334633633646634646430343663333361306235373364636465 Jan 23 17:33:28.502000 audit: BPF prog-id=238 op=UNLOAD Jan 23 17:33:28.502000 audit[4927]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:33:28.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633334633633646634646430343663333361306235373364636465 Jan 23 17:33:28.502000 audit: BPF prog-id=240 op=LOAD Jan 23 17:33:28.502000 audit[4927]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633334633633646634646430343663333361306235373364636465 Jan 23 17:33:28.512091 systemd-networkd[1580]: cali9e4de6ca463: Link UP Jan 23 17:33:28.512700 systemd-networkd[1580]: cali9e4de6ca463: Gained carrier Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.065 [INFO][4624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0 coredns-674b8bbfcf- kube-system 96e04b17-1bf7-4d71-9569-39faa03d6952 824 0 2026-01-23 17:32:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-1-0-4-2c8b61c80e coredns-674b8bbfcf-qcpzs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e4de6ca463 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-qcpzs" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.066 [INFO][4624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Namespace="kube-system" Pod="coredns-674b8bbfcf-qcpzs" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.102 [INFO][4682] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" HandleID="k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.103 [INFO][4682] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" HandleID="k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-1-0-4-2c8b61c80e", "pod":"coredns-674b8bbfcf-qcpzs", "timestamp":"2026-01-23 17:33:28.102654232 +0000 UTC"}, Hostname:"ci-4547-1-0-4-2c8b61c80e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.103 [INFO][4682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.410 [INFO][4682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.410 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-4-2c8b61c80e' Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.465 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.474 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.483 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.487 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.490 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.490 [INFO][4682] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.491 [INFO][4682] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59 Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.497 [INFO][4682] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.504 [INFO][4682] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.123.199/26] block=192.168.123.192/26 handle="k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.505 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.199/26] handle="k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.505 [INFO][4682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:33:28.526803 containerd[1668]: 2026-01-23 17:33:28.505 [INFO][4682] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.199/26] IPv6=[] ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" HandleID="k8s-pod-network.c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" Jan 23 17:33:28.528420 containerd[1668]: 2026-01-23 17:33:28.508 [INFO][4624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Namespace="kube-system" Pod="coredns-674b8bbfcf-qcpzs" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"96e04b17-1bf7-4d71-9569-39faa03d6952", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"", Pod:"coredns-674b8bbfcf-qcpzs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e4de6ca463", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.528420 containerd[1668]: 2026-01-23 17:33:28.508 [INFO][4624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.199/32] ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Namespace="kube-system" Pod="coredns-674b8bbfcf-qcpzs" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" Jan 23 17:33:28.528420 containerd[1668]: 2026-01-23 17:33:28.509 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e4de6ca463 ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Namespace="kube-system" Pod="coredns-674b8bbfcf-qcpzs" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" Jan 23 17:33:28.528420 containerd[1668]: 2026-01-23 17:33:28.511 [INFO][4624] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Namespace="kube-system" Pod="coredns-674b8bbfcf-qcpzs" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" Jan 23 17:33:28.528420 containerd[1668]: 2026-01-23 17:33:28.511 [INFO][4624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Namespace="kube-system" Pod="coredns-674b8bbfcf-qcpzs" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"96e04b17-1bf7-4d71-9569-39faa03d6952", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59", Pod:"coredns-674b8bbfcf-qcpzs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e4de6ca463", 
MAC:"4e:95:c5:dc:76:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:28.528862 containerd[1668]: 2026-01-23 17:33:28.523 [INFO][4624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" Namespace="kube-system" Pod="coredns-674b8bbfcf-qcpzs" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-coredns--674b8bbfcf--qcpzs-eth0" Jan 23 17:33:28.528862 containerd[1668]: time="2026-01-23T17:33:28.528267560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfmmc,Uid:decbe37e-6413-4ebb-af5d-fd959613c007,Namespace:calico-system,Attempt:0,} returns sandbox id \"4dc34c63df4dd046c33a0b573dcdead247195933ccf121b4172ed0baff792e8f\"" Jan 23 17:33:28.540000 audit[4963]: NETFILTER_CFG table=filter:132 family=2 entries=54 op=nft_register_chain pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:28.540000 audit[4963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25556 a0=3 a1=ffffc6d23010 a2=0 a3=ffffa0d64fa8 items=0 ppid=4374 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.540000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:28.549267 
containerd[1668]: time="2026-01-23T17:33:28.549181423Z" level=info msg="connecting to shim c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59" address="unix:///run/containerd/s/87443b3dfca241db0bede3d43de057ac94b82374ec2c2c5645ac50156401f333" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:28.572951 systemd[1]: Started cri-containerd-c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59.scope - libcontainer container c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59. Jan 23 17:33:28.582000 audit: BPF prog-id=241 op=LOAD Jan 23 17:33:28.583000 audit: BPF prog-id=242 op=LOAD Jan 23 17:33:28.583000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4972 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653061616362636662646436336136616233656165633862313734 Jan 23 17:33:28.583000 audit: BPF prog-id=242 op=UNLOAD Jan 23 17:33:28.583000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653061616362636662646436336136616233656165633862313734 Jan 23 17:33:28.583000 audit: BPF prog-id=243 op=LOAD Jan 23 
17:33:28.583000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4972 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653061616362636662646436336136616233656165633862313734 Jan 23 17:33:28.583000 audit: BPF prog-id=244 op=LOAD Jan 23 17:33:28.583000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4972 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653061616362636662646436336136616233656165633862313734 Jan 23 17:33:28.583000 audit: BPF prog-id=244 op=UNLOAD Jan 23 17:33:28.583000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653061616362636662646436336136616233656165633862313734 Jan 23 
17:33:28.583000 audit: BPF prog-id=243 op=UNLOAD Jan 23 17:33:28.583000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653061616362636662646436336136616233656165633862313734 Jan 23 17:33:28.583000 audit: BPF prog-id=245 op=LOAD Jan 23 17:33:28.583000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4972 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653061616362636662646436336136616233656165633862313734 Jan 23 17:33:28.606294 containerd[1668]: time="2026-01-23T17:33:28.606239422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qcpzs,Uid:96e04b17-1bf7-4d71-9569-39faa03d6952,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59\"" Jan 23 17:33:28.612223 containerd[1668]: time="2026-01-23T17:33:28.612179052Z" level=info msg="CreateContainer within sandbox \"c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:33:28.620689 containerd[1668]: 
time="2026-01-23T17:33:28.620647413Z" level=info msg="Container 4989696107a43afc4b024a383107311470d4a685683b5a5331e65aab3ef332b5: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:33:28.626033 containerd[1668]: time="2026-01-23T17:33:28.625996559Z" level=info msg="CreateContainer within sandbox \"c3e0aacbcfbdd63a6ab3eaec8b1740488f3ffe21011f8fe9ffcad5bea7a9bd59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4989696107a43afc4b024a383107311470d4a685683b5a5331e65aab3ef332b5\"" Jan 23 17:33:28.626803 containerd[1668]: time="2026-01-23T17:33:28.626776363Z" level=info msg="StartContainer for \"4989696107a43afc4b024a383107311470d4a685683b5a5331e65aab3ef332b5\"" Jan 23 17:33:28.627711 containerd[1668]: time="2026-01-23T17:33:28.627629367Z" level=info msg="connecting to shim 4989696107a43afc4b024a383107311470d4a685683b5a5331e65aab3ef332b5" address="unix:///run/containerd/s/87443b3dfca241db0bede3d43de057ac94b82374ec2c2c5645ac50156401f333" protocol=ttrpc version=3 Jan 23 17:33:28.648964 systemd[1]: Started cri-containerd-4989696107a43afc4b024a383107311470d4a685683b5a5331e65aab3ef332b5.scope - libcontainer container 4989696107a43afc4b024a383107311470d4a685683b5a5331e65aab3ef332b5. 
Jan 23 17:33:28.663000 audit: BPF prog-id=246 op=LOAD Jan 23 17:33:28.664000 audit: BPF prog-id=247 op=LOAD Jan 23 17:33:28.664000 audit[5010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4972 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383936393631303761343361666334623032346133383331303733 Jan 23 17:33:28.664000 audit: BPF prog-id=247 op=UNLOAD Jan 23 17:33:28.664000 audit[5010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383936393631303761343361666334623032346133383331303733 Jan 23 17:33:28.664000 audit: BPF prog-id=248 op=LOAD Jan 23 17:33:28.664000 audit[5010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4972 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.664000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383936393631303761343361666334623032346133383331303733 Jan 23 17:33:28.664000 audit: BPF prog-id=249 op=LOAD Jan 23 17:33:28.664000 audit[5010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4972 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383936393631303761343361666334623032346133383331303733 Jan 23 17:33:28.664000 audit: BPF prog-id=249 op=UNLOAD Jan 23 17:33:28.664000 audit[5010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383936393631303761343361666334623032346133383331303733 Jan 23 17:33:28.664000 audit: BPF prog-id=248 op=UNLOAD Jan 23 17:33:28.664000 audit[5010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:33:28.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383936393631303761343361666334623032346133383331303733 Jan 23 17:33:28.664000 audit: BPF prog-id=250 op=LOAD Jan 23 17:33:28.664000 audit[5010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4972 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:28.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439383936393631303761343361666334623032346133383331303733 Jan 23 17:33:28.685666 containerd[1668]: time="2026-01-23T17:33:28.685619652Z" level=info msg="StartContainer for \"4989696107a43afc4b024a383107311470d4a685683b5a5331e65aab3ef332b5\" returns successfully" Jan 23 17:33:28.688493 containerd[1668]: time="2026-01-23T17:33:28.688388945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:28.690361 containerd[1668]: time="2026-01-23T17:33:28.690316955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:33:28.690432 containerd[1668]: time="2026-01-23T17:33:28.690402915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:28.690572 kubelet[2951]: E0123 17:33:28.690534 2951 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:33:28.690616 kubelet[2951]: E0123 17:33:28.690577 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:33:28.691782 kubelet[2951]: E0123 17:33:28.691304 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k
ube-api-access-88zgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hp496_calico-system(49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:28.691923 containerd[1668]: time="2026-01-23T17:33:28.691771842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:33:28.693605 kubelet[2951]: E0123 17:33:28.693561 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:33:29.019966 containerd[1668]: time="2026-01-23T17:33:29.019924972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:29.021223 containerd[1668]: time="2026-01-23T17:33:29.021147978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:33:29.021390 containerd[1668]: time="2026-01-23T17:33:29.021234938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:29.021501 kubelet[2951]: E0123 17:33:29.021469 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:29.021586 kubelet[2951]: E0123 17:33:29.021571 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:29.022048 containerd[1668]: time="2026-01-23T17:33:29.022000982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:33:29.022565 kubelet[2951]: E0123 17:33:29.022233 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqd9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6686778f54-7m9rh_calico-apiserver(0cda543d-2a18-4d2d-aa42-b11c9c8288f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:29.023420 kubelet[2951]: E0123 17:33:29.023375 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:33:29.097723 kubelet[2951]: E0123 17:33:29.097664 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:33:29.106918 kubelet[2951]: E0123 17:33:29.106876 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:33:29.120000 audit[5051]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:29.120000 audit[5051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcf03b190 a2=0 a3=1 items=0 ppid=3062 pid=5051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:29.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:29.124795 kubelet[2951]: I0123 17:33:29.123588 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fwt87" podStartSLOduration=46.12357032 podStartE2EDuration="46.12357032s" podCreationTimestamp="2026-01-23 17:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:33:29.110354535 +0000 UTC m=+50.235660069" 
watchObservedRunningTime="2026-01-23 17:33:29.12357032 +0000 UTC m=+50.248875814" Jan 23 17:33:29.129000 audit[5051]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:29.129000 audit[5051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcf03b190 a2=0 a3=1 items=0 ppid=3062 pid=5051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:29.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:29.147797 kubelet[2951]: I0123 17:33:29.147256 2951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qcpzs" podStartSLOduration=46.147237556 podStartE2EDuration="46.147237556s" podCreationTimestamp="2026-01-23 17:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:33:29.136307143 +0000 UTC m=+50.261612677" watchObservedRunningTime="2026-01-23 17:33:29.147237556 +0000 UTC m=+50.272543090" Jan 23 17:33:29.157000 audit[5053]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:29.157000 audit[5053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff58d6aa0 a2=0 a3=1 items=0 ppid=3062 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:29.157000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:29.169000 audit[5053]: NETFILTER_CFG table=nat:136 family=2 entries=47 op=nft_register_chain pid=5053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:29.169000 audit[5053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff58d6aa0 a2=0 a3=1 items=0 ppid=3062 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:29.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:29.318185 systemd-networkd[1580]: cali35c0af9e92f: Gained IPv6LL Jan 23 17:33:29.358712 containerd[1668]: time="2026-01-23T17:33:29.358655953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:29.360102 containerd[1668]: time="2026-01-23T17:33:29.360051800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:33:29.360190 containerd[1668]: time="2026-01-23T17:33:29.360142361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:29.360427 kubelet[2951]: E0123 17:33:29.360351 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:33:29.360427 kubelet[2951]: E0123 17:33:29.360416 2951 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:33:29.360597 kubelet[2951]: E0123 17:33:29.360549 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,
EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:29.364348 containerd[1668]: time="2026-01-23T17:33:29.364254741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:33:29.693968 containerd[1668]: time="2026-01-23T17:33:29.693659237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:29.695476 containerd[1668]: time="2026-01-23T17:33:29.695430406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:33:29.695556 containerd[1668]: time="2026-01-23T17:33:29.695453046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:29.695740 kubelet[2951]: E0123 17:33:29.695668 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:33:29.695791 kubelet[2951]: E0123 17:33:29.695743 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:33:29.696258 kubelet[2951]: E0123 17:33:29.695879 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,St
dinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:29.697335 kubelet[2951]: E0123 17:33:29.697299 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:29.893935 systemd-networkd[1580]: cali3b5d6efc025: Gained IPv6LL Jan 23 17:33:29.949007 containerd[1668]: time="2026-01-23T17:33:29.948885809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f66fb8c4-942js,Uid:e9193ceb-0e99-470a-bb0b-413b40f3616d,Namespace:calico-system,Attempt:0,}" Jan 23 17:33:30.022900 systemd-networkd[1580]: cali752f14fe5f7: Gained IPv6LL Jan 23 17:33:30.067640 systemd-networkd[1580]: calif5a75dc2d53: Link UP Jan 23 17:33:30.068000 systemd-networkd[1580]: calif5a75dc2d53: Gained carrier Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:29.992 [INFO][5056] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0 calico-kube-controllers-5f66fb8c4- calico-system e9193ceb-0e99-470a-bb0b-413b40f3616d 826 0 2026-01-23 17:33:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f66fb8c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-1-0-4-2c8b61c80e calico-kube-controllers-5f66fb8c4-942js eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif5a75dc2d53 [] [] }} ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Namespace="calico-system" Pod="calico-kube-controllers-5f66fb8c4-942js" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:29.992 [INFO][5056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Namespace="calico-system" Pod="calico-kube-controllers-5f66fb8c4-942js" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.014 [INFO][5070] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" HandleID="k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.014 [INFO][5070] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" 
HandleID="k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b07c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-1-0-4-2c8b61c80e", "pod":"calico-kube-controllers-5f66fb8c4-942js", "timestamp":"2026-01-23 17:33:30.014224729 +0000 UTC"}, Hostname:"ci-4547-1-0-4-2c8b61c80e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.014 [INFO][5070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.014 [INFO][5070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.014 [INFO][5070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-4-2c8b61c80e' Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.025 [INFO][5070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.040 [INFO][5070] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.045 [INFO][5070] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.047 [INFO][5070] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.049 [INFO][5070] ipam/ipam.go 
235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.049 [INFO][5070] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.051 [INFO][5070] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.055 [INFO][5070] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.062 [INFO][5070] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.200/26] block=192.168.123.192/26 handle="k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.062 [INFO][5070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.200/26] handle="k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" host="ci-4547-1-0-4-2c8b61c80e" Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.063 [INFO][5070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:33:30.080377 containerd[1668]: 2026-01-23 17:33:30.063 [INFO][5070] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.200/26] IPv6=[] ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" HandleID="k8s-pod-network.530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Workload="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" Jan 23 17:33:30.081114 containerd[1668]: 2026-01-23 17:33:30.065 [INFO][5056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Namespace="calico-system" Pod="calico-kube-controllers-5f66fb8c4-942js" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0", GenerateName:"calico-kube-controllers-5f66fb8c4-", Namespace:"calico-system", SelfLink:"", UID:"e9193ceb-0e99-470a-bb0b-413b40f3616d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f66fb8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"", Pod:"calico-kube-controllers-5f66fb8c4-942js", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5a75dc2d53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:30.081114 containerd[1668]: 2026-01-23 17:33:30.065 [INFO][5056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.200/32] ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Namespace="calico-system" Pod="calico-kube-controllers-5f66fb8c4-942js" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" Jan 23 17:33:30.081114 containerd[1668]: 2026-01-23 17:33:30.065 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5a75dc2d53 ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Namespace="calico-system" Pod="calico-kube-controllers-5f66fb8c4-942js" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" Jan 23 17:33:30.081114 containerd[1668]: 2026-01-23 17:33:30.067 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Namespace="calico-system" Pod="calico-kube-controllers-5f66fb8c4-942js" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" Jan 23 17:33:30.081114 containerd[1668]: 2026-01-23 17:33:30.068 [INFO][5056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Namespace="calico-system" Pod="calico-kube-controllers-5f66fb8c4-942js" 
WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0", GenerateName:"calico-kube-controllers-5f66fb8c4-", Namespace:"calico-system", SelfLink:"", UID:"e9193ceb-0e99-470a-bb0b-413b40f3616d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f66fb8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-4-2c8b61c80e", ContainerID:"530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf", Pod:"calico-kube-controllers-5f66fb8c4-942js", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5a75dc2d53", MAC:"42:f1:0e:63:6c:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:33:30.081114 containerd[1668]: 2026-01-23 17:33:30.078 [INFO][5056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" Namespace="calico-system" 
Pod="calico-kube-controllers-5f66fb8c4-942js" WorkloadEndpoint="ci--4547--1--0--4--2c8b61c80e-k8s-calico--kube--controllers--5f66fb8c4--942js-eth0" Jan 23 17:33:30.086895 systemd-networkd[1580]: califfa716522d1: Gained IPv6LL Jan 23 17:33:30.095000 audit[5087]: NETFILTER_CFG table=filter:137 family=2 entries=58 op=nft_register_chain pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:33:30.095000 audit[5087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27148 a0=3 a1=ffffccc27980 a2=0 a3=ffffaa520fa8 items=0 ppid=4374 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.095000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:33:30.109448 containerd[1668]: time="2026-01-23T17:33:30.108957114Z" level=info msg="connecting to shim 530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf" address="unix:///run/containerd/s/50124099557d82821cc67eedb5b8ef39a5dfa6ef9fe7a5a5e2c3cf95f7681fd8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:33:30.109562 kubelet[2951]: E0123 17:33:30.109520 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:33:30.109946 kubelet[2951]: E0123 17:33:30.109876 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:33:30.113139 kubelet[2951]: E0123 17:33:30.112029 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:30.155981 systemd[1]: Started cri-containerd-530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf.scope - libcontainer container 530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf. 
Jan 23 17:33:30.165000 audit: BPF prog-id=251 op=LOAD Jan 23 17:33:30.165000 audit: BPF prog-id=252 op=LOAD Jan 23 17:33:30.165000 audit[5109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5097 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533306236356535396465396565366133346636393730353735663431 Jan 23 17:33:30.165000 audit: BPF prog-id=252 op=UNLOAD Jan 23 17:33:30.165000 audit[5109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5097 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533306236356535396465396565366133346636393730353735663431 Jan 23 17:33:30.165000 audit: BPF prog-id=253 op=LOAD Jan 23 17:33:30.165000 audit[5109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5097 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.165000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533306236356535396465396565366133346636393730353735663431 Jan 23 17:33:30.166000 audit: BPF prog-id=254 op=LOAD Jan 23 17:33:30.166000 audit[5109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5097 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533306236356535396465396565366133346636393730353735663431 Jan 23 17:33:30.166000 audit: BPF prog-id=254 op=UNLOAD Jan 23 17:33:30.166000 audit[5109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5097 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533306236356535396465396565366133346636393730353735663431 Jan 23 17:33:30.166000 audit: BPF prog-id=253 op=UNLOAD Jan 23 17:33:30.166000 audit[5109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5097 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:33:30.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533306236356535396465396565366133346636393730353735663431 Jan 23 17:33:30.166000 audit: BPF prog-id=255 op=LOAD Jan 23 17:33:30.166000 audit[5109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5097 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533306236356535396465396565366133346636393730353735663431 Jan 23 17:33:30.190916 containerd[1668]: time="2026-01-23T17:33:30.190853836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f66fb8c4-942js,Uid:e9193ceb-0e99-470a-bb0b-413b40f3616d,Namespace:calico-system,Attempt:0,} returns sandbox id \"530b65e59de9ee6a34f6970575f415aadd641671a2b9d8cee4f7fa3173c0cfbf\"" Jan 23 17:33:30.192873 containerd[1668]: time="2026-01-23T17:33:30.192846406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:33:30.193000 audit[5137]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:30.193000 audit[5137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe89503a0 a2=0 a3=1 items=0 ppid=3062 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:30.201000 audit[5137]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:33:30.201000 audit[5137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe89503a0 a2=0 a3=1 items=0 ppid=3062 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:33:30.201000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:33:30.341961 systemd-networkd[1580]: cali9e4de6ca463: Gained IPv6LL Jan 23 17:33:30.539223 containerd[1668]: time="2026-01-23T17:33:30.539095064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:30.541093 containerd[1668]: time="2026-01-23T17:33:30.541034314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:33:30.541169 containerd[1668]: time="2026-01-23T17:33:30.541107474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:30.541344 kubelet[2951]: E0123 17:33:30.541297 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:33:30.541392 kubelet[2951]: E0123 17:33:30.541348 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:33:30.541542 kubelet[2951]: E0123 17:33:30.541479 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f28f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f66fb8c4-942js_calico-system(e9193ceb-0e99-470a-bb0b-413b40f3616d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:30.542834 kubelet[2951]: E0123 17:33:30.542783 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:33:31.112628 kubelet[2951]: 
E0123 17:33:31.112496 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:33:31.749950 systemd-networkd[1580]: calif5a75dc2d53: Gained IPv6LL Jan 23 17:33:32.115076 kubelet[2951]: E0123 17:33:32.114949 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:33:36.949625 containerd[1668]: time="2026-01-23T17:33:36.949546791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:33:37.274811 containerd[1668]: time="2026-01-23T17:33:37.274588225Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:37.275972 containerd[1668]: time="2026-01-23T17:33:37.275880912Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:33:37.276039 containerd[1668]: 
time="2026-01-23T17:33:37.275945712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:37.276236 kubelet[2951]: E0123 17:33:37.276184 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:33:37.276501 kubelet[2951]: E0123 17:33:37.276242 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:33:37.276501 kubelet[2951]: E0123 17:33:37.276360 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:acf1b8a947d84100b9ec5aa90fd31b84,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67qtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10
001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589548c48f-dtbrr_calico-system(de63b504-5477-4705-aea8-65396b064e08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:37.278851 containerd[1668]: time="2026-01-23T17:33:37.278828046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:33:37.621240 containerd[1668]: time="2026-01-23T17:33:37.620884604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:37.622246 containerd[1668]: time="2026-01-23T17:33:37.622208291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:33:37.622404 containerd[1668]: time="2026-01-23T17:33:37.622253771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:37.622547 kubelet[2951]: E0123 17:33:37.622511 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:33:37.622621 kubelet[2951]: E0123 17:33:37.622561 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:33:37.622709 kubelet[2951]: E0123 17:33:37.622668 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67qtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalat
ion:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589548c48f-dtbrr_calico-system(de63b504-5477-4705-aea8-65396b064e08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:37.623916 kubelet[2951]: E0123 17:33:37.623839 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:33:39.949301 containerd[1668]: time="2026-01-23T17:33:39.949068665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:33:40.299523 containerd[1668]: time="2026-01-23T17:33:40.299445384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:40.300769 containerd[1668]: time="2026-01-23T17:33:40.300687310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:33:40.300980 kubelet[2951]: E0123 17:33:40.300943 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:40.301575 containerd[1668]: time="2026-01-23T17:33:40.300735830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:40.301625 kubelet[2951]: E0123 17:33:40.301338 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:40.301625 kubelet[2951]: E0123 17:33:40.301530 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btn72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6686778f54-qdppv_calico-apiserver(287c72fe-ff92-46ec-9e19-273465100dda): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:40.302707 kubelet[2951]: E0123 17:33:40.302644 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:33:40.948863 containerd[1668]: time="2026-01-23T17:33:40.948668329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:33:41.291112 containerd[1668]: time="2026-01-23T17:33:41.291014528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:41.294358 containerd[1668]: time="2026-01-23T17:33:41.294316344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:33:41.294547 containerd[1668]: time="2026-01-23T17:33:41.294358545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:41.294604 kubelet[2951]: E0123 17:33:41.294521 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:41.294604 kubelet[2951]: E0123 17:33:41.294567 2951 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:41.294782 kubelet[2951]: E0123 17:33:41.294706 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqd9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6686778f54-7m9rh_calico-apiserver(0cda543d-2a18-4d2d-aa42-b11c9c8288f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:41.295934 kubelet[2951]: E0123 17:33:41.295889 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:33:42.950553 containerd[1668]: time="2026-01-23T17:33:42.950486349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:33:43.284631 containerd[1668]: time="2026-01-23T17:33:43.284580108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
17:33:43.286098 containerd[1668]: time="2026-01-23T17:33:43.286058835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:33:43.286215 containerd[1668]: time="2026-01-23T17:33:43.286088355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:43.286347 kubelet[2951]: E0123 17:33:43.286290 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:33:43.286347 kubelet[2951]: E0123 17:33:43.286342 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:33:43.286630 kubelet[2951]: E0123 17:33:43.286460 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 23 17:33:43.288381 containerd[1668]: time="2026-01-23T17:33:43.288348766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:33:43.641857 containerd[1668]: time="2026-01-23T17:33:43.641727300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:43.642954 containerd[1668]: time="2026-01-23T17:33:43.642882265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:33:43.643038 containerd[1668]: time="2026-01-23T17:33:43.642952306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:43.643110 kubelet[2951]: E0123 17:33:43.643070 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:33:43.643156 kubelet[2951]: E0123 17:33:43.643120 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:33:43.643300 kubelet[2951]: E0123 17:33:43.643235 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:43.644570 kubelet[2951]: E0123 17:33:43.644520 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:44.951156 containerd[1668]: time="2026-01-23T17:33:44.950464960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:33:45.288926 containerd[1668]: time="2026-01-23T17:33:45.288872060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:45.290899 containerd[1668]: time="2026-01-23T17:33:45.290862990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:33:45.290992 containerd[1668]: time="2026-01-23T17:33:45.290949030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:45.291185 kubelet[2951]: E0123 17:33:45.291143 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:33:45.291488 kubelet[2951]: E0123 17:33:45.291196 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:33:45.291488 kubelet[2951]: E0123 17:33:45.291330 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88zgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hp496_calico-system(49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:45.292542 kubelet[2951]: E0123 17:33:45.292491 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:33:46.948590 containerd[1668]: 
time="2026-01-23T17:33:46.948541881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:33:47.288301 containerd[1668]: time="2026-01-23T17:33:47.288077067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:47.289364 containerd[1668]: time="2026-01-23T17:33:47.289254113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:33:47.289364 containerd[1668]: time="2026-01-23T17:33:47.289327753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:47.289525 kubelet[2951]: E0123 17:33:47.289468 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:33:47.289826 kubelet[2951]: E0123 17:33:47.289537 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:33:47.289826 kubelet[2951]: E0123 17:33:47.289660 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f28f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f66fb8c4-942js_calico-system(e9193ceb-0e99-470a-bb0b-413b40f3616d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:47.290870 kubelet[2951]: E0123 17:33:47.290840 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:33:48.950704 kubelet[2951]: E0123 17:33:48.950623 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:33:52.951747 kubelet[2951]: E0123 17:33:52.951036 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:33:53.949404 kubelet[2951]: E0123 17:33:53.949277 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:33:54.951143 kubelet[2951]: E0123 17:33:54.950284 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:33:55.950082 kubelet[2951]: E0123 17:33:55.950022 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:34:00.950339 containerd[1668]: time="2026-01-23T17:34:00.950140847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:34:01.313653 containerd[1668]: time="2026-01-23T17:34:01.313569190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:01.314960 containerd[1668]: time="2026-01-23T17:34:01.314919796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:34:01.315054 
containerd[1668]: time="2026-01-23T17:34:01.315005477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:01.315162 kubelet[2951]: E0123 17:34:01.315126 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:34:01.315434 kubelet[2951]: E0123 17:34:01.315184 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:34:01.315761 kubelet[2951]: E0123 17:34:01.315676 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:acf1b8a947d84100b9ec5aa90fd31b84,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67qtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions
:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589548c48f-dtbrr_calico-system(de63b504-5477-4705-aea8-65396b064e08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:01.317842 containerd[1668]: time="2026-01-23T17:34:01.317804130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:34:01.844632 containerd[1668]: time="2026-01-23T17:34:01.844188473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:01.845793 containerd[1668]: time="2026-01-23T17:34:01.845732320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:34:01.846016 containerd[1668]: time="2026-01-23T17:34:01.845803320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:01.846257 kubelet[2951]: E0123 17:34:01.846205 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:34:01.846325 kubelet[2951]: E0123 17:34:01.846274 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:34:01.846562 kubelet[2951]: E0123 17:34:01.846497 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67qtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalat
ion:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589548c48f-dtbrr_calico-system(de63b504-5477-4705-aea8-65396b064e08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:01.847861 kubelet[2951]: E0123 17:34:01.847808 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:34:01.949600 kubelet[2951]: E0123 17:34:01.949518 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:34:04.949770 containerd[1668]: time="2026-01-23T17:34:04.949620626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:34:05.313795 containerd[1668]: time="2026-01-23T17:34:05.312362606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:05.315623 containerd[1668]: time="2026-01-23T17:34:05.315521221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:34:05.316986 containerd[1668]: time="2026-01-23T17:34:05.316795268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:05.317942 kubelet[2951]: E0123 17:34:05.317896 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:05.318237 kubelet[2951]: E0123 17:34:05.317952 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:05.318237 kubelet[2951]: E0123 17:34:05.318073 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btn72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6686778f54-qdppv_calico-apiserver(287c72fe-ff92-46ec-9e19-273465100dda): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:05.319306 kubelet[2951]: E0123 17:34:05.319265 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:34:05.948938 containerd[1668]: time="2026-01-23T17:34:05.948844848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:34:06.297558 containerd[1668]: time="2026-01-23T17:34:06.297435598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:06.300437 containerd[1668]: time="2026-01-23T17:34:06.300385653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:34:06.300515 containerd[1668]: time="2026-01-23T17:34:06.300444333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:06.300725 kubelet[2951]: E0123 17:34:06.300679 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:34:06.300799 kubelet[2951]: E0123 17:34:06.300729 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:34:06.301302 kubelet[2951]: E0123 17:34:06.300864 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:06.303592 containerd[1668]: time="2026-01-23T17:34:06.303280227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:34:06.626714 containerd[1668]: time="2026-01-23T17:34:06.626500932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:06.629294 containerd[1668]: time="2026-01-23T17:34:06.629247546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:34:06.629372 containerd[1668]: time="2026-01-23T17:34:06.629343506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:06.629525 kubelet[2951]: E0123 17:34:06.629492 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:34:06.629810 kubelet[2951]: E0123 17:34:06.629540 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:34:06.629810 kubelet[2951]: E0123 17:34:06.629671 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:06.630862 kubelet[2951]: E0123 17:34:06.630818 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:34:07.949325 containerd[1668]: time="2026-01-23T17:34:07.949268381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:34:08.296963 containerd[1668]: time="2026-01-23T17:34:08.296900927Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:08.298336 containerd[1668]: time="2026-01-23T17:34:08.298279013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:34:08.298420 containerd[1668]: time="2026-01-23T17:34:08.298367454Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:08.298542 kubelet[2951]: E0123 17:34:08.298505 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:34:08.298842 kubelet[2951]: E0123 17:34:08.298553 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:34:08.298842 kubelet[2951]: E0123 17:34:08.298681 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88zgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hp496_calico-system(49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:08.299862 kubelet[2951]: E0123 17:34:08.299819 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:34:08.949915 containerd[1668]: time="2026-01-23T17:34:08.949590488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:34:09.273947 containerd[1668]: time="2026-01-23T17:34:09.273895559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:09.275363 containerd[1668]: time="2026-01-23T17:34:09.275332206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:34:09.275502 containerd[1668]: time="2026-01-23T17:34:09.275362966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:09.275572 kubelet[2951]: E0123 17:34:09.275531 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:09.275620 kubelet[2951]: E0123 17:34:09.275579 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:09.275748 kubelet[2951]: E0123 17:34:09.275710 2951 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqd9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6686778f54-7m9rh_calico-apiserver(0cda543d-2a18-4d2d-aa42-b11c9c8288f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:09.276999 kubelet[2951]: E0123 17:34:09.276958 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:34:15.949076 containerd[1668]: time="2026-01-23T17:34:15.949032904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:34:15.950002 kubelet[2951]: E0123 17:34:15.949841 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:34:16.291990 containerd[1668]: time="2026-01-23T17:34:16.291891466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:16.293746 containerd[1668]: time="2026-01-23T17:34:16.293696195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:34:16.293853 containerd[1668]: time="2026-01-23T17:34:16.293793796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:16.294012 kubelet[2951]: E0123 17:34:16.293955 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:34:16.294012 kubelet[2951]: E0123 17:34:16.294009 2951 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:34:16.294516 kubelet[2951]: E0123 17:34:16.294468 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f28f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f66fb8c4-942js_calico-system(e9193ceb-0e99-470a-bb0b-413b40f3616d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:16.295837 kubelet[2951]: E0123 17:34:16.295799 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:34:17.949326 kubelet[2951]: 
E0123 17:34:17.949273 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:34:19.950388 kubelet[2951]: E0123 17:34:19.949944 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:34:19.950388 kubelet[2951]: E0123 17:34:19.950017 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:34:19.951511 kubelet[2951]: E0123 17:34:19.951102 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:34:27.949226 kubelet[2951]: E0123 17:34:27.949114 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:34:27.951012 kubelet[2951]: E0123 17:34:27.949961 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:34:30.950721 kubelet[2951]: E0123 17:34:30.950650 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:34:30.951879 kubelet[2951]: E0123 17:34:30.951846 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:34:30.952403 kubelet[2951]: E0123 17:34:30.952371 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:34:31.949395 kubelet[2951]: E0123 17:34:31.949323 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:34:38.949446 kubelet[2951]: E0123 17:34:38.949381 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:34:39.950676 kubelet[2951]: E0123 17:34:39.950612 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:34:41.949325 kubelet[2951]: E0123 17:34:41.949203 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:34:41.949687 kubelet[2951]: E0123 17:34:41.949524 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:34:42.949245 kubelet[2951]: E0123 17:34:42.949151 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:34:45.948576 kubelet[2951]: E0123 17:34:45.948471 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:34:52.950067 kubelet[2951]: E0123 17:34:52.949955 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:34:53.452321 systemd[1]: Started sshd@11-10.0.6.147:22-4.153.228.146:43540.service - OpenSSH per-connection 
server daemon (4.153.228.146:43540). Jan 23 17:34:53.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.147:22-4.153.228.146:43540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:53.453195 kernel: kauditd_printk_skb: 233 callbacks suppressed Jan 23 17:34:53.453291 kernel: audit: type=1130 audit(1769189693.451:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.147:22-4.153.228.146:43540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:53.992000 audit[5267]: USER_ACCT pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:53.995279 sshd[5267]: Accepted publickey for core from 4.153.228.146 port 43540 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:34:53.996073 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:53.994000 audit[5267]: CRED_ACQ pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.001156 kernel: audit: type=1101 audit(1769189693.992:743): pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.001228 kernel: audit: type=1103 audit(1769189693.994:744): pid=5267 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.001255 kernel: audit: type=1006 audit(1769189693.994:745): pid=5267 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 17:34:53.994000 audit[5267]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8448c50 a2=3 a3=0 items=0 ppid=1 pid=5267 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:54.006585 kernel: audit: type=1300 audit(1769189693.994:745): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8448c50 a2=3 a3=0 items=0 ppid=1 pid=5267 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:53.994000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:54.008285 kernel: audit: type=1327 audit(1769189693.994:745): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:54.009694 systemd-logind[1643]: New session 13 of user core. Jan 23 17:34:54.023076 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 17:34:54.025000 audit[5267]: USER_START pid=5267 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.029000 audit[5271]: CRED_ACQ pid=5271 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.033497 kernel: audit: type=1105 audit(1769189694.025:746): pid=5267 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.033545 kernel: audit: type=1103 audit(1769189694.029:747): pid=5271 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.373063 sshd[5271]: Connection closed by 4.153.228.146 port 43540 Jan 23 17:34:54.373535 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:54.374000 audit[5267]: USER_END pid=5267 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.374000 audit[5267]: CRED_DISP pid=5267 uid=0 auid=500 
ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.381150 systemd[1]: sshd@11-10.0.6.147:22-4.153.228.146:43540.service: Deactivated successfully. Jan 23 17:34:54.382800 kernel: audit: type=1106 audit(1769189694.374:748): pid=5267 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.382882 kernel: audit: type=1104 audit(1769189694.374:749): pid=5267 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:34:54.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.147:22-4.153.228.146:43540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:54.385566 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 17:34:54.386969 systemd-logind[1643]: Session 13 logged out. Waiting for processes to exit. Jan 23 17:34:54.389034 systemd-logind[1643]: Removed session 13. 
Jan 23 17:34:54.949846 containerd[1668]: time="2026-01-23T17:34:54.949088097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:34:55.297240 containerd[1668]: time="2026-01-23T17:34:55.297182206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:55.298794 containerd[1668]: time="2026-01-23T17:34:55.298728933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:34:55.298876 containerd[1668]: time="2026-01-23T17:34:55.298812134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:55.298998 kubelet[2951]: E0123 17:34:55.298959 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:34:55.299250 kubelet[2951]: E0123 17:34:55.299016 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:34:55.299285 kubelet[2951]: E0123 17:34:55.299225 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:acf1b8a947d84100b9ec5aa90fd31b84,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67qtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589548c48f-dtbrr_calico-system(de63b504-5477-4705-aea8-65396b064e08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:55.299495 containerd[1668]: time="2026-01-23T17:34:55.299470297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:34:55.631597 containerd[1668]: time="2026-01-23T17:34:55.631471727Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:55.632977 containerd[1668]: time="2026-01-23T17:34:55.632916574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:34:55.633064 containerd[1668]: time="2026-01-23T17:34:55.632956694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:55.633187 kubelet[2951]: E0123 17:34:55.633128 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:34:55.633233 kubelet[2951]: E0123 17:34:55.633190 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:34:55.633481 kubelet[2951]: E0123 17:34:55.633404 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 23 17:34:55.633635 containerd[1668]: time="2026-01-23T17:34:55.633613817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:34:56.048131 containerd[1668]: time="2026-01-23T17:34:56.048042611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:56.049616 containerd[1668]: time="2026-01-23T17:34:56.049563139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:34:56.049692 containerd[1668]: time="2026-01-23T17:34:56.049642019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:56.049853 kubelet[2951]: E0123 17:34:56.049813 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:56.049900 kubelet[2951]: E0123 17:34:56.049862 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:56.050390 containerd[1668]: time="2026-01-23T17:34:56.050261582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:34:56.050903 kubelet[2951]: E0123 17:34:56.050576 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btn72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6686778f54-qdppv_calico-apiserver(287c72fe-ff92-46ec-9e19-273465100dda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:56.052467 kubelet[2951]: E0123 17:34:56.052398 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:34:56.418443 containerd[1668]: time="2026-01-23T17:34:56.418300389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:56.419657 containerd[1668]: time="2026-01-23T17:34:56.419610835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:34:56.419807 containerd[1668]: time="2026-01-23T17:34:56.419646755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:56.419841 kubelet[2951]: E0123 17:34:56.419807 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:34:56.420079 kubelet[2951]: E0123 17:34:56.419854 2951 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:34:56.420079 kubelet[2951]: E0123 17:34:56.420032 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67qtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorP
rofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589548c48f-dtbrr_calico-system(de63b504-5477-4705-aea8-65396b064e08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:56.420598 containerd[1668]: time="2026-01-23T17:34:56.420557200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:34:56.421781 kubelet[2951]: E0123 17:34:56.421699 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:34:56.748403 containerd[1668]: time="2026-01-23T17:34:56.748322369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:56.749806 containerd[1668]: time="2026-01-23T17:34:56.749740136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found" Jan 23 17:34:56.749897 containerd[1668]: time="2026-01-23T17:34:56.749836696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:56.750177 kubelet[2951]: E0123 17:34:56.750121 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:34:56.750177 kubelet[2951]: E0123 17:34:56.750174 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:34:56.750319 kubelet[2951]: E0123 17:34:56.750282 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:56.751691 kubelet[2951]: E0123 17:34:56.751654 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:34:56.950262 containerd[1668]: time="2026-01-23T17:34:56.949912884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:34:57.276530 containerd[1668]: time="2026-01-23T17:34:57.276445897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:57.277715 containerd[1668]: time="2026-01-23T17:34:57.277679823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:34:57.277787 containerd[1668]: time="2026-01-23T17:34:57.277719543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:57.277967 kubelet[2951]: E0123 17:34:57.277924 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:34:57.278019 kubelet[2951]: E0123 17:34:57.277980 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:34:57.278443 kubelet[2951]: E0123 17:34:57.278111 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88zgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hp496_calico-system(49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:57.279309 kubelet[2951]: E0123 17:34:57.279274 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:34:59.485763 systemd[1]: Started 
sshd@12-10.0.6.147:22-4.153.228.146:42554.service - OpenSSH per-connection server daemon (4.153.228.146:42554). Jan 23 17:34:59.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.147:22-4.153.228.146:42554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:59.486912 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:34:59.486978 kernel: audit: type=1130 audit(1769189699.485:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.147:22-4.153.228.146:42554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:59.949044 containerd[1668]: time="2026-01-23T17:34:59.948927476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:35:00.023000 audit[5300]: USER_ACCT pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.024137 sshd[5300]: Accepted publickey for core from 4.153.228.146 port 42554 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:00.025000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.028343 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:00.029456 kernel: audit: type=1101 audit(1769189700.023:752): pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.029511 kernel: audit: type=1103 audit(1769189700.025:753): pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.029530 kernel: audit: type=1006 audit(1769189700.025:754): pid=5300 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 23 17:35:00.025000 audit[5300]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff702f950 a2=3 a3=0 items=0 ppid=1 pid=5300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:00.034144 kernel: audit: type=1300 audit(1769189700.025:754): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff702f950 a2=3 a3=0 items=0 ppid=1 pid=5300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:00.034330 kernel: audit: type=1327 audit(1769189700.025:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:00.025000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:00.037822 systemd-logind[1643]: New session 14 of user core. Jan 23 17:35:00.044954 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 17:35:00.046000 audit[5300]: USER_START pid=5300 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.050000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.054190 kernel: audit: type=1105 audit(1769189700.046:755): pid=5300 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.054248 kernel: audit: type=1103 audit(1769189700.050:756): pid=5304 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.294446 containerd[1668]: time="2026-01-23T17:35:00.294280812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:35:00.295784 containerd[1668]: time="2026-01-23T17:35:00.295730619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:35:00.295999 containerd[1668]: time="2026-01-23T17:35:00.295831459Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:35:00.296939 kubelet[2951]: E0123 17:35:00.296141 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:35:00.296939 kubelet[2951]: E0123 17:35:00.296185 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:35:00.296939 kubelet[2951]: E0123 17:35:00.296303 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqd9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6686778f54-7m9rh_calico-apiserver(0cda543d-2a18-4d2d-aa42-b11c9c8288f8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:35:00.297973 kubelet[2951]: E0123 17:35:00.297934 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:35:00.375024 sshd[5304]: Connection closed by 4.153.228.146 port 42554 Jan 23 17:35:00.375549 sshd-session[5300]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:00.376000 audit[5300]: USER_END pid=5300 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.379666 systemd[1]: sshd@12-10.0.6.147:22-4.153.228.146:42554.service: Deactivated successfully. Jan 23 17:35:00.376000 audit[5300]: CRED_DISP pid=5300 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.381445 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 23 17:35:00.383009 kernel: audit: type=1106 audit(1769189700.376:757): pid=5300 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.383116 kernel: audit: type=1104 audit(1769189700.376:758): pid=5300 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:00.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.147:22-4.153.228.146:42554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:00.383987 systemd-logind[1643]: Session 14 logged out. Waiting for processes to exit. Jan 23 17:35:00.385601 systemd-logind[1643]: Removed session 14. Jan 23 17:35:00.484457 systemd[1]: Started sshd@13-10.0.6.147:22-4.153.228.146:42564.service - OpenSSH per-connection server daemon (4.153.228.146:42564). Jan 23 17:35:00.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.6.147:22-4.153.228.146:42564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:35:01.025000 audit[5319]: USER_ACCT pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:01.028021 sshd[5319]: Accepted publickey for core from 4.153.228.146 port 42564 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:01.029000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:01.029000 audit[5319]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda85ec50 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:01.029000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:01.031085 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:01.040354 systemd-logind[1643]: New session 15 of user core. Jan 23 17:35:01.045004 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 23 17:35:01.046000 audit[5319]: USER_START pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:01.049000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:01.426049 sshd[5323]: Connection closed by 4.153.228.146 port 42564 Jan 23 17:35:01.425376 sshd-session[5319]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:01.426000 audit[5319]: USER_END pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:01.426000 audit[5319]: CRED_DISP pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:01.429885 systemd[1]: sshd@13-10.0.6.147:22-4.153.228.146:42564.service: Deactivated successfully. Jan 23 17:35:01.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.6.147:22-4.153.228.146:42564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:01.431520 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 23 17:35:01.432243 systemd-logind[1643]: Session 15 logged out. Waiting for processes to exit. Jan 23 17:35:01.434594 systemd-logind[1643]: Removed session 15. Jan 23 17:35:01.542827 systemd[1]: Started sshd@14-10.0.6.147:22-4.153.228.146:42580.service - OpenSSH per-connection server daemon (4.153.228.146:42580). Jan 23 17:35:01.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.6.147:22-4.153.228.146:42580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:02.080000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:02.081730 sshd[5334]: Accepted publickey for core from 4.153.228.146 port 42580 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:02.081000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:02.081000 audit[5334]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc34f6f60 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:02.081000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:02.083394 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:02.087110 systemd-logind[1643]: New session 16 of user core. 
Jan 23 17:35:02.100074 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 17:35:02.102000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:02.104000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:02.429743 sshd[5345]: Connection closed by 4.153.228.146 port 42580 Jan 23 17:35:02.430073 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:02.430000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:02.430000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:02.434487 systemd-logind[1643]: Session 16 logged out. Waiting for processes to exit. Jan 23 17:35:02.434732 systemd[1]: sshd@14-10.0.6.147:22-4.153.228.146:42580.service: Deactivated successfully. 
Jan 23 17:35:02.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.6.147:22-4.153.228.146:42580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:02.436530 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 17:35:02.438086 systemd-logind[1643]: Removed session 16. Jan 23 17:35:06.951534 containerd[1668]: time="2026-01-23T17:35:06.951025224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:35:07.306075 containerd[1668]: time="2026-01-23T17:35:07.305897087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:35:07.310780 containerd[1668]: time="2026-01-23T17:35:07.310205348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:35:07.310780 containerd[1668]: time="2026-01-23T17:35:07.310309069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:35:07.310925 kubelet[2951]: E0123 17:35:07.310494 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:35:07.310925 kubelet[2951]: E0123 17:35:07.310547 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:35:07.310925 kubelet[2951]: E0123 17:35:07.310686 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f28f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f66fb8c4-942js_calico-system(e9193ceb-0e99-470a-bb0b-413b40f3616d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:35:07.311887 kubelet[2951]: E0123 17:35:07.311825 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:35:07.539523 systemd[1]: Started sshd@15-10.0.6.147:22-4.153.228.146:55424.service - OpenSSH per-connection server daemon (4.153.228.146:55424). 
Jan 23 17:35:07.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.147:22-4.153.228.146:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:07.542693 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 17:35:07.542773 kernel: audit: type=1130 audit(1769189707.538:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.147:22-4.153.228.146:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:07.949279 kubelet[2951]: E0123 17:35:07.949232 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:35:08.072000 audit[5360]: USER_ACCT pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:08.076000 sshd[5360]: Accepted publickey for core from 4.153.228.146 port 55424 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:08.075000 audit[5360]: CRED_ACQ pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 
17:35:08.077321 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:08.079131 kernel: audit: type=1101 audit(1769189708.072:779): pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:08.079197 kernel: audit: type=1103 audit(1769189708.075:780): pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:08.081195 kernel: audit: type=1006 audit(1769189708.075:781): pid=5360 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 23 17:35:08.081269 kernel: audit: type=1300 audit(1769189708.075:781): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff09a5d50 a2=3 a3=0 items=0 ppid=1 pid=5360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:08.075000 audit[5360]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff09a5d50 a2=3 a3=0 items=0 ppid=1 pid=5360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:08.081936 systemd-logind[1643]: New session 17 of user core. 
Jan 23 17:35:08.075000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:35:08.085786 kernel: audit: type=1327 audit(1769189708.075:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:35:08.093001 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 17:35:08.095000 audit[5360]: USER_START pid=5360 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:08.099000 audit[5368]: CRED_ACQ pid=5368 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:08.103274 kernel: audit: type=1105 audit(1769189708.095:782): pid=5360 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:08.103360 kernel: audit: type=1103 audit(1769189708.099:783): pid=5368 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:08.424193 sshd[5368]: Connection closed by 4.153.228.146 port 55424
Jan 23 17:35:08.424718 sshd-session[5360]: pam_unix(sshd:session): session closed for user core
Jan 23 17:35:08.425000 audit[5360]: USER_END pid=5360 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:08.426000 audit[5360]: CRED_DISP pid=5360 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:08.429972 systemd-logind[1643]: Session 17 logged out. Waiting for processes to exit.
Jan 23 17:35:08.430206 systemd[1]: sshd@15-10.0.6.147:22-4.153.228.146:55424.service: Deactivated successfully.
Jan 23 17:35:08.432093 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 17:35:08.432234 kernel: audit: type=1106 audit(1769189708.425:784): pid=5360 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:08.432269 kernel: audit: type=1104 audit(1769189708.426:785): pid=5360 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:08.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.147:22-4.153.228.146:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:08.435141 systemd-logind[1643]: Removed session 17.
Jan 23 17:35:08.950219 kubelet[2951]: E0123 17:35:08.949923 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded"
Jan 23 17:35:08.952776 kubelet[2951]: E0123 17:35:08.952409 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007"
Jan 23 17:35:10.951236 kubelet[2951]: E0123 17:35:10.951189 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08"
Jan 23 17:35:13.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.147:22-4.153.228.146:55430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:13.536574 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 17:35:13.536617 kernel: audit: type=1130 audit(1769189713.532:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.147:22-4.153.228.146:55430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:13.533068 systemd[1]: Started sshd@16-10.0.6.147:22-4.153.228.146:55430.service - OpenSSH per-connection server daemon (4.153.228.146:55430).
Jan 23 17:35:13.949703 kubelet[2951]: E0123 17:35:13.949567 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8"
Jan 23 17:35:14.075000 audit[5381]: USER_ACCT pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.079852 sshd[5381]: Accepted publickey for core from 4.153.228.146 port 55430 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw
Jan 23 17:35:14.080000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.082249 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:35:14.084164 kernel: audit: type=1101 audit(1769189714.075:788): pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.084230 kernel: audit: type=1103 audit(1769189714.080:789): pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.086364 kernel: audit: type=1006 audit(1769189714.080:790): pid=5381 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1
Jan 23 17:35:14.086420 kernel: audit: type=1300 audit(1769189714.080:790): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffeec7940 a2=3 a3=0 items=0 ppid=1 pid=5381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:14.080000 audit[5381]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffeec7940 a2=3 a3=0 items=0 ppid=1 pid=5381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:14.080000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:35:14.090907 kernel: audit: type=1327 audit(1769189714.080:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:35:14.093290 systemd-logind[1643]: New session 18 of user core.
Jan 23 17:35:14.100001 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 17:35:14.102000 audit[5381]: USER_START pid=5381 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.104000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.109287 kernel: audit: type=1105 audit(1769189714.102:791): pid=5381 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.109361 kernel: audit: type=1103 audit(1769189714.104:792): pid=5385 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.456776 sshd[5385]: Connection closed by 4.153.228.146 port 55430
Jan 23 17:35:14.456607 sshd-session[5381]: pam_unix(sshd:session): session closed for user core
Jan 23 17:35:14.457000 audit[5381]: USER_END pid=5381 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.461326 systemd[1]: sshd@16-10.0.6.147:22-4.153.228.146:55430.service: Deactivated successfully.
Jan 23 17:35:14.457000 audit[5381]: CRED_DISP pid=5381 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.463139 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 17:35:14.464374 kernel: audit: type=1106 audit(1769189714.457:793): pid=5381 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.464444 kernel: audit: type=1104 audit(1769189714.457:794): pid=5381 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:14.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.147:22-4.153.228.146:55430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:14.465468 systemd-logind[1643]: Session 18 logged out. Waiting for processes to exit.
Jan 23 17:35:14.466443 systemd-logind[1643]: Removed session 18.
Jan 23 17:35:18.955565 kubelet[2951]: E0123 17:35:18.955519 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda"
Jan 23 17:35:19.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.6.147:22-4.153.228.146:44718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:19.563698 systemd[1]: Started sshd@17-10.0.6.147:22-4.153.228.146:44718.service - OpenSSH per-connection server daemon (4.153.228.146:44718).
Jan 23 17:35:19.567101 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 17:35:19.567164 kernel: audit: type=1130 audit(1769189719.563:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.6.147:22-4.153.228.146:44718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:20.077000 audit[5400]: USER_ACCT pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.078295 sshd[5400]: Accepted publickey for core from 4.153.228.146 port 44718 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw
Jan 23 17:35:20.079000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.083682 kernel: audit: type=1101 audit(1769189720.077:797): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.083726 kernel: audit: type=1103 audit(1769189720.079:798): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.083746 kernel: audit: type=1006 audit(1769189720.079:799): pid=5400 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Jan 23 17:35:20.081061 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:35:20.079000 audit[5400]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf855490 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:20.088765 kernel: audit: type=1300 audit(1769189720.079:799): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf855490 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:20.079000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:35:20.089976 kernel: audit: type=1327 audit(1769189720.079:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:35:20.093167 systemd-logind[1643]: New session 19 of user core.
Jan 23 17:35:20.108000 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 17:35:20.109000 audit[5400]: USER_START pid=5400 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.111000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.116169 kernel: audit: type=1105 audit(1769189720.109:800): pid=5400 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.116296 kernel: audit: type=1103 audit(1769189720.111:801): pid=5404 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.427648 sshd[5404]: Connection closed by 4.153.228.146 port 44718
Jan 23 17:35:20.427895 sshd-session[5400]: pam_unix(sshd:session): session closed for user core
Jan 23 17:35:20.429000 audit[5400]: USER_END pid=5400 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.433206 systemd[1]: sshd@17-10.0.6.147:22-4.153.228.146:44718.service: Deactivated successfully.
Jan 23 17:35:20.429000 audit[5400]: CRED_DISP pid=5400 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.435108 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 17:35:20.435919 systemd-logind[1643]: Session 19 logged out. Waiting for processes to exit.
Jan 23 17:35:20.436194 kernel: audit: type=1106 audit(1769189720.429:802): pid=5400 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.436306 kernel: audit: type=1104 audit(1769189720.429:803): pid=5400 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:20.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.6.147:22-4.153.228.146:44718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:20.437399 systemd-logind[1643]: Removed session 19.
Jan 23 17:35:20.542710 systemd[1]: Started sshd@18-10.0.6.147:22-4.153.228.146:44728.service - OpenSSH per-connection server daemon (4.153.228.146:44728).
Jan 23 17:35:20.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.6.147:22-4.153.228.146:44728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:20.949786 kubelet[2951]: E0123 17:35:20.949445 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d"
Jan 23 17:35:21.088000 audit[5418]: USER_ACCT pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:21.089834 sshd[5418]: Accepted publickey for core from 4.153.228.146 port 44728 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw
Jan 23 17:35:21.089000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:21.089000 audit[5418]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7773e50 a2=3 a3=0 items=0 ppid=1 pid=5418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:21.089000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:35:21.091544 sshd-session[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:35:21.097820 systemd-logind[1643]: New session 20 of user core.
Jan 23 17:35:21.098135 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 17:35:21.100000 audit[5418]: USER_START pid=5418 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:21.103000 audit[5441]: CRED_ACQ pid=5441 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:21.507399 sshd[5441]: Connection closed by 4.153.228.146 port 44728
Jan 23 17:35:21.507833 sshd-session[5418]: pam_unix(sshd:session): session closed for user core
Jan 23 17:35:21.508000 audit[5418]: USER_END pid=5418 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:21.508000 audit[5418]: CRED_DISP pid=5418 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:21.511692 systemd[1]: sshd@18-10.0.6.147:22-4.153.228.146:44728.service: Deactivated successfully.
Jan 23 17:35:21.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.6.147:22-4.153.228.146:44728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:21.513478 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 17:35:21.514926 systemd-logind[1643]: Session 20 logged out. Waiting for processes to exit.
Jan 23 17:35:21.515741 systemd-logind[1643]: Removed session 20.
Jan 23 17:35:21.619081 systemd[1]: Started sshd@19-10.0.6.147:22-4.153.228.146:44734.service - OpenSSH per-connection server daemon (4.153.228.146:44734).
Jan 23 17:35:21.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.6.147:22-4.153.228.146:44734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:21.950205 kubelet[2951]: E0123 17:35:21.949660 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded"
Jan 23 17:35:21.950205 kubelet[2951]: E0123 17:35:21.949809 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08"
Jan 23 17:35:22.159000 audit[5459]: USER_ACCT pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:22.161089 sshd[5459]: Accepted publickey for core from 4.153.228.146 port 44734 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw
Jan 23 17:35:22.161000 audit[5459]: CRED_ACQ pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:22.161000 audit[5459]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff07a8c30 a2=3 a3=0 items=0 ppid=1 pid=5459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:22.161000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:35:22.162905 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:35:22.171035 systemd-logind[1643]: New session 21 of user core.
Jan 23 17:35:22.177949 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 17:35:22.179000 audit[5459]: USER_START pid=5459 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:22.181000 audit[5463]: CRED_ACQ pid=5463 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:22.951619 kubelet[2951]: E0123 17:35:22.951549 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007"
Jan 23 17:35:22.951000 audit[5475]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 23 17:35:22.951000 audit[5475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcfa30210 a2=0 a3=1 items=0 ppid=3062 pid=5475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:22.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 23 17:35:22.960000 audit[5475]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 23 17:35:22.960000 audit[5475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcfa30210 a2=0 a3=1 items=0 ppid=3062 pid=5475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:22.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 23 17:35:22.980000 audit[5477]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 23 17:35:22.980000 audit[5477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd5e85990 a2=0 a3=1 items=0 ppid=3062 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:22.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 23 17:35:22.985000 audit[5477]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 23 17:35:22.985000 audit[5477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd5e85990 a2=0 a3=1 items=0 ppid=3062 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:35:22.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 23 17:35:23.060926 sshd[5463]: Connection closed by 4.153.228.146 port 44734
Jan 23 17:35:23.061467 sshd-session[5459]: pam_unix(sshd:session): session closed for user core
Jan 23 17:35:23.062000 audit[5459]: USER_END pid=5459 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:23.062000 audit[5459]: CRED_DISP pid=5459 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 23 17:35:23.066007 systemd[1]: sshd@19-10.0.6.147:22-4.153.228.146:44734.service: Deactivated successfully.
Jan 23 17:35:23.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.6.147:22-4.153.228.146:44734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:35:23.068925 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 17:35:23.070846 systemd-logind[1643]: Session 21 logged out. Waiting for processes to exit.
Jan 23 17:35:23.074029 systemd-logind[1643]: Removed session 21.
Jan 23 17:35:23.170102 systemd[1]: Started sshd@20-10.0.6.147:22-4.153.228.146:44742.service - OpenSSH per-connection server daemon (4.153.228.146:44742).
Jan 23 17:35:23.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.6.147:22-4.153.228.146:44742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:23.711000 audit[5482]: USER_ACCT pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:23.712522 sshd[5482]: Accepted publickey for core from 4.153.228.146 port 44742 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:23.713000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:23.713000 audit[5482]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5ecbed0 a2=3 a3=0 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:23.713000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:23.714791 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:23.719820 systemd-logind[1643]: New session 22 of user core. Jan 23 17:35:23.725938 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 17:35:23.727000 audit[5482]: USER_START pid=5482 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:23.729000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.222290 sshd[5486]: Connection closed by 4.153.228.146 port 44742 Jan 23 17:35:24.223005 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:24.224000 audit[5482]: USER_END pid=5482 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.225000 audit[5482]: CRED_DISP pid=5482 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.228725 systemd[1]: sshd@20-10.0.6.147:22-4.153.228.146:44742.service: Deactivated successfully. Jan 23 17:35:24.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.6.147:22-4.153.228.146:44742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:24.230481 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 23 17:35:24.232483 systemd-logind[1643]: Session 22 logged out. Waiting for processes to exit. Jan 23 17:35:24.233349 systemd-logind[1643]: Removed session 22. Jan 23 17:35:24.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.147:22-4.153.228.146:44756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:24.328124 systemd[1]: Started sshd@21-10.0.6.147:22-4.153.228.146:44756.service - OpenSSH per-connection server daemon (4.153.228.146:44756). Jan 23 17:35:24.856259 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 23 17:35:24.856355 kernel: audit: type=1101 audit(1769189724.851:837): pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.851000 audit[5498]: USER_ACCT pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.856504 sshd[5498]: Accepted publickey for core from 4.153.228.146 port 44756 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:24.855000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.857524 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:24.859553 kernel: audit: type=1103 audit(1769189724.855:838): pid=5498 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.861338 kernel: audit: type=1006 audit(1769189724.856:839): pid=5498 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 23 17:35:24.856000 audit[5498]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0086bf0 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:24.864428 kernel: audit: type=1300 audit(1769189724.856:839): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0086bf0 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:24.856000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:24.865791 kernel: audit: type=1327 audit(1769189724.856:839): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:24.867252 systemd-logind[1643]: New session 23 of user core. Jan 23 17:35:24.877255 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 17:35:24.879000 audit[5498]: USER_START pid=5498 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.883000 audit[5502]: CRED_ACQ pid=5502 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.887346 kernel: audit: type=1105 audit(1769189724.879:840): pid=5498 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:24.887439 kernel: audit: type=1103 audit(1769189724.883:841): pid=5502 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:25.206840 sshd[5502]: Connection closed by 4.153.228.146 port 44756 Jan 23 17:35:25.206336 sshd-session[5498]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:25.208000 audit[5498]: USER_END pid=5498 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:25.213514 systemd[1]: 
sshd@21-10.0.6.147:22-4.153.228.146:44756.service: Deactivated successfully. Jan 23 17:35:25.208000 audit[5498]: CRED_DISP pid=5498 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:25.217330 kernel: audit: type=1106 audit(1769189725.208:842): pid=5498 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:25.217388 kernel: audit: type=1104 audit(1769189725.208:843): pid=5498 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:25.219105 kernel: audit: type=1131 audit(1769189725.212:844): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.147:22-4.153.228.146:44756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:25.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.147:22-4.153.228.146:44756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:25.215208 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 17:35:25.217246 systemd-logind[1643]: Session 23 logged out. Waiting for processes to exit. Jan 23 17:35:25.220397 systemd-logind[1643]: Removed session 23. 
Jan 23 17:35:27.948966 kubelet[2951]: E0123 17:35:27.948880 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:35:28.083000 audit[5515]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:35:28.083000 audit[5515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff23cdaf0 a2=0 a3=1 items=0 ppid=3062 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:28.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:35:28.092000 audit[5515]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:35:28.092000 audit[5515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff23cdaf0 a2=0 a3=1 items=0 ppid=3062 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:28.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:35:30.319000 audit[1]: SERVICE_START 
pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.147:22-4.153.228.146:49188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:30.320305 systemd[1]: Started sshd@22-10.0.6.147:22-4.153.228.146:49188.service - OpenSSH per-connection server daemon (4.153.228.146:49188). Jan 23 17:35:30.321118 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 23 17:35:30.321165 kernel: audit: type=1130 audit(1769189730.319:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.147:22-4.153.228.146:49188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:30.847000 audit[5518]: USER_ACCT pid=5518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:30.848000 sshd[5518]: Accepted publickey for core from 4.153.228.146 port 49188 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:30.851780 kernel: audit: type=1101 audit(1769189730.847:848): pid=5518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:30.851000 audit[5518]: CRED_ACQ pid=5518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:30.853040 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:30.857099 kernel: 
audit: type=1103 audit(1769189730.851:849): pid=5518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:30.857178 kernel: audit: type=1006 audit(1769189730.851:850): pid=5518 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 23 17:35:30.851000 audit[5518]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4520990 a2=3 a3=0 items=0 ppid=1 pid=5518 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:30.859882 systemd-logind[1643]: New session 24 of user core. Jan 23 17:35:30.860605 kernel: audit: type=1300 audit(1769189730.851:850): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4520990 a2=3 a3=0 items=0 ppid=1 pid=5518 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:30.860646 kernel: audit: type=1327 audit(1769189730.851:850): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:30.851000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:30.868951 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 17:35:30.870000 audit[5518]: USER_START pid=5518 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:30.872000 audit[5522]: CRED_ACQ pid=5522 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:30.877548 kernel: audit: type=1105 audit(1769189730.870:851): pid=5518 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:30.877619 kernel: audit: type=1103 audit(1769189730.872:852): pid=5522 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:30.949023 kubelet[2951]: E0123 17:35:30.948931 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:35:31.210961 sshd[5522]: 
Connection closed by 4.153.228.146 port 49188 Jan 23 17:35:31.211190 sshd-session[5518]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:31.211000 audit[5518]: USER_END pid=5518 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:31.211000 audit[5518]: CRED_DISP pid=5518 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:31.216965 systemd[1]: sshd@22-10.0.6.147:22-4.153.228.146:49188.service: Deactivated successfully. Jan 23 17:35:31.218536 kernel: audit: type=1106 audit(1769189731.211:853): pid=5518 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:31.218611 kernel: audit: type=1104 audit(1769189731.211:854): pid=5518 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:31.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.147:22-4.153.228.146:49188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:31.219247 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 23 17:35:31.220981 systemd-logind[1643]: Session 24 logged out. Waiting for processes to exit. Jan 23 17:35:31.222177 systemd-logind[1643]: Removed session 24. Jan 23 17:35:32.957006 kubelet[2951]: E0123 17:35:32.956954 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:35:33.948987 kubelet[2951]: E0123 17:35:33.948913 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:35:35.949264 kubelet[2951]: E0123 17:35:35.949153 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:35:36.328450 systemd[1]: Started sshd@23-10.0.6.147:22-4.153.228.146:39600.service - OpenSSH per-connection server daemon (4.153.228.146:39600). Jan 23 17:35:36.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.6.147:22-4.153.228.146:39600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:36.331774 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:35:36.331831 kernel: audit: type=1130 audit(1769189736.328:856): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.6.147:22-4.153.228.146:39600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:35:36.867000 audit[5535]: USER_ACCT pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:36.868172 sshd[5535]: Accepted publickey for core from 4.153.228.146 port 39600 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:36.870000 audit[5535]: CRED_ACQ pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:36.871865 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:36.873792 kernel: audit: type=1101 audit(1769189736.867:857): pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:36.873858 kernel: audit: type=1103 audit(1769189736.870:858): pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:36.875535 kernel: audit: type=1006 audit(1769189736.870:859): pid=5535 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 23 17:35:36.870000 audit[5535]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdec47e30 a2=3 a3=0 items=0 ppid=1 pid=5535 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:36.879272 kernel: audit: type=1300 audit(1769189736.870:859): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdec47e30 a2=3 a3=0 items=0 ppid=1 pid=5535 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:36.879381 kernel: audit: type=1327 audit(1769189736.870:859): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:36.870000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:36.884735 systemd-logind[1643]: New session 25 of user core. Jan 23 17:35:36.892987 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 17:35:36.895000 audit[5535]: USER_START pid=5535 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:36.898000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:36.901741 kernel: audit: type=1105 audit(1769189736.895:860): pid=5535 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:36.901822 kernel: audit: type=1103 audit(1769189736.898:861): 
pid=5539 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:36.954413 kubelet[2951]: E0123 17:35:36.954360 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:35:37.223732 sshd[5539]: Connection closed by 4.153.228.146 port 39600 Jan 23 17:35:37.224366 sshd-session[5535]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:37.224000 audit[5535]: USER_END pid=5535 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:37.229015 systemd[1]: sshd@23-10.0.6.147:22-4.153.228.146:39600.service: Deactivated successfully. 
Jan 23 17:35:37.224000 audit[5535]: CRED_DISP pid=5535 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:37.231004 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 17:35:37.232380 kernel: audit: type=1106 audit(1769189737.224:862): pid=5535 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:37.232440 kernel: audit: type=1104 audit(1769189737.224:863): pid=5535 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:37.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.6.147:22-4.153.228.146:39600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:37.232975 systemd-logind[1643]: Session 25 logged out. Waiting for processes to exit. Jan 23 17:35:37.234363 systemd-logind[1643]: Removed session 25. 
Jan 23 17:35:40.949115 kubelet[2951]: E0123 17:35:40.949020 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:35:42.330146 systemd[1]: Started sshd@24-10.0.6.147:22-4.153.228.146:39614.service - OpenSSH per-connection server daemon (4.153.228.146:39614). Jan 23 17:35:42.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.6.147:22-4.153.228.146:39614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:42.331034 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:35:42.331076 kernel: audit: type=1130 audit(1769189742.329:865): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.6.147:22-4.153.228.146:39614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:35:42.871306 sshd[5555]: Accepted publickey for core from 4.153.228.146 port 39614 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:42.870000 audit[5555]: USER_ACCT pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:42.874779 kernel: audit: type=1101 audit(1769189742.870:866): pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:42.874000 audit[5555]: CRED_ACQ pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:42.878301 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:42.880017 kernel: audit: type=1103 audit(1769189742.874:867): pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:42.880082 kernel: audit: type=1006 audit(1769189742.876:868): pid=5555 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 23 17:35:42.876000 audit[5555]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe26d0b0 a2=3 a3=0 items=0 ppid=1 pid=5555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:42.883231 kernel: audit: type=1300 audit(1769189742.876:868): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe26d0b0 a2=3 a3=0 items=0 ppid=1 pid=5555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:42.883303 kernel: audit: type=1327 audit(1769189742.876:868): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:42.876000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:42.887750 systemd-logind[1643]: New session 26 of user core. Jan 23 17:35:42.895988 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 17:35:42.898000 audit[5555]: USER_START pid=5555 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:42.900000 audit[5559]: CRED_ACQ pid=5559 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:42.905778 kernel: audit: type=1105 audit(1769189742.898:869): pid=5555 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:42.905859 kernel: audit: type=1103 audit(1769189742.900:870): 
pid=5559 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:43.232677 sshd[5559]: Connection closed by 4.153.228.146 port 39614 Jan 23 17:35:43.233959 sshd-session[5555]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:43.235000 audit[5555]: USER_END pid=5555 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:43.235000 audit[5555]: CRED_DISP pid=5555 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:43.242799 kernel: audit: type=1106 audit(1769189743.235:871): pid=5555 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:43.242980 kernel: audit: type=1104 audit(1769189743.235:872): pid=5555 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:43.241553 systemd[1]: sshd@24-10.0.6.147:22-4.153.228.146:39614.service: Deactivated successfully. 
Jan 23 17:35:43.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.6.147:22-4.153.228.146:39614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:43.245509 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 17:35:43.247265 systemd-logind[1643]: Session 26 logged out. Waiting for processes to exit. Jan 23 17:35:43.250801 systemd-logind[1643]: Removed session 26. Jan 23 17:35:43.949280 kubelet[2951]: E0123 17:35:43.949238 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:35:44.948974 kubelet[2951]: E0123 17:35:44.948571 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:35:45.948593 kubelet[2951]: E0123 17:35:45.948476 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:35:48.350802 systemd[1]: Started sshd@25-10.0.6.147:22-4.153.228.146:56920.service - OpenSSH per-connection server daemon (4.153.228.146:56920). Jan 23 17:35:48.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.6.147:22-4.153.228.146:56920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:48.353838 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:35:48.353950 kernel: audit: type=1130 audit(1769189748.350:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.6.147:22-4.153.228.146:56920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:35:48.895000 audit[5575]: USER_ACCT pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:48.896205 sshd[5575]: Accepted publickey for core from 4.153.228.146 port 56920 ssh2: RSA SHA256:W4H1ZAj5T8TSA1hLSDGRkaV3wBweA5tg1dlTi453QCw Jan 23 17:35:48.898000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:48.900057 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:48.901890 kernel: audit: type=1101 audit(1769189748.895:875): pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:48.902013 kernel: audit: type=1103 audit(1769189748.898:876): pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:48.902083 kernel: audit: type=1006 audit(1769189748.898:877): pid=5575 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 23 17:35:48.903440 kernel: audit: type=1300 audit(1769189748.898:877): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcecd3e30 a2=3 a3=0 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:48.898000 audit[5575]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcecd3e30 a2=3 a3=0 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:48.906713 kernel: audit: type=1327 audit(1769189748.898:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:48.898000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:48.907428 systemd-logind[1643]: New session 27 of user core. Jan 23 17:35:48.913179 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 17:35:48.917000 audit[5575]: USER_START pid=5575 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:48.918000 audit[5579]: CRED_ACQ pid=5579 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:48.923867 kernel: audit: type=1105 audit(1769189748.917:878): pid=5575 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:48.923969 kernel: audit: type=1103 audit(1769189748.918:879): pid=5579 uid=0 
auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:49.266677 sshd[5579]: Connection closed by 4.153.228.146 port 56920 Jan 23 17:35:49.266971 sshd-session[5575]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:49.268000 audit[5575]: USER_END pid=5575 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:49.272112 systemd[1]: sshd@25-10.0.6.147:22-4.153.228.146:56920.service: Deactivated successfully. Jan 23 17:35:49.268000 audit[5575]: CRED_DISP pid=5575 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:49.274303 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 17:35:49.275403 systemd-logind[1643]: Session 27 logged out. Waiting for processes to exit. 
Jan 23 17:35:49.275667 kernel: audit: type=1106 audit(1769189749.268:880): pid=5575 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:49.275712 kernel: audit: type=1104 audit(1769189749.268:881): pid=5575 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 23 17:35:49.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.6.147:22-4.153.228.146:56920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:49.277001 systemd-logind[1643]: Removed session 27. 
Jan 23 17:35:49.949157 kubelet[2951]: E0123 17:35:49.949103 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:35:50.952361 kubelet[2951]: E0123 17:35:50.952298 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:35:54.951291 kubelet[2951]: E0123 17:35:54.951155 2951 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:35:57.948810 kubelet[2951]: E0123 17:35:57.948766 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:35:57.949209 kubelet[2951]: E0123 17:35:57.948857 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:35:58.949631 kubelet[2951]: E0123 17:35:58.949300 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:36:00.950611 kubelet[2951]: E0123 17:36:00.950550 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:36:02.950721 kubelet[2951]: E0123 17:36:02.950589 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:36:05.948314 kubelet[2951]: E0123 17:36:05.948267 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:36:08.950545 kubelet[2951]: E0123 17:36:08.950498 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:36:09.949584 kubelet[2951]: E0123 17:36:09.949499 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:36:09.949584 kubelet[2951]: E0123 17:36:09.949527 2951 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:36:13.949441 kubelet[2951]: E0123 17:36:13.949346 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007" Jan 23 17:36:14.949927 kubelet[2951]: E0123 17:36:14.949568 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589548c48f-dtbrr" podUID="de63b504-5477-4705-aea8-65396b064e08" Jan 23 17:36:19.949273 kubelet[2951]: E0123 17:36:19.949228 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-7m9rh" podUID="0cda543d-2a18-4d2d-aa42-b11c9c8288f8" Jan 23 17:36:20.392179 kubelet[2951]: E0123 17:36:20.391941 2951 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.6.147:37316->10.0.6.231:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-666569f655-hp496.188d6c936e935b41 calico-system 1741 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-hp496,UID:49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded,APIVersion:v1,ResourceVersion:810,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4547-1-0-4-2c8b61c80e,},FirstTimestamp:2026-01-23 17:33:29 +0000 UTC,LastTimestamp:2026-01-23 17:36:09.949445461 +0000 UTC m=+211.074750995,Count:12,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-1-0-4-2c8b61c80e,}" Jan 23 17:36:20.949179 containerd[1668]: time="2026-01-23T17:36:20.949131643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:36:21.063274 systemd[1]: cri-containerd-19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c.scope: Deactivated successfully. Jan 23 17:36:21.063742 systemd[1]: cri-containerd-19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c.scope: Consumed 36.638s CPU time, 101.8M memory peak. Jan 23 17:36:21.065131 containerd[1668]: time="2026-01-23T17:36:21.065093413Z" level=info msg="received container exit event container_id:\"19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c\" id:\"19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c\" pid:3283 exit_status:1 exited_at:{seconds:1769189781 nanos:64786411}" Jan 23 17:36:21.066000 audit: BPF prog-id=146 op=UNLOAD Jan 23 17:36:21.069267 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:36:21.069342 kernel: audit: type=1334 audit(1769189781.066:883): prog-id=146 op=UNLOAD Jan 23 17:36:21.069367 kernel: audit: type=1334 audit(1769189781.066:884): prog-id=150 op=UNLOAD Jan 23 17:36:21.066000 audit: BPF prog-id=150 op=UNLOAD Jan 23 17:36:21.088456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c-rootfs.mount: Deactivated successfully. 
Jan 23 17:36:21.291922 containerd[1668]: time="2026-01-23T17:36:21.291851446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:36:21.293495 containerd[1668]: time="2026-01-23T17:36:21.293455494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:36:21.293592 containerd[1668]: time="2026-01-23T17:36:21.293542375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:36:21.293736 kubelet[2951]: E0123 17:36:21.293697 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:36:21.294061 kubelet[2951]: E0123 17:36:21.293747 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:36:21.294091 kubelet[2951]: E0123 17:36:21.294024 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88zgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hp496_calico-system(49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:36:21.294172 containerd[1668]: time="2026-01-23T17:36:21.294052577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:36:21.295455 kubelet[2951]: E0123 17:36:21.295425 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hp496" podUID="49fbdce9-bd52-4d37-a0e6-5d7fa3a92ded" Jan 23 17:36:21.391585 systemd[1]: cri-containerd-607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea.scope: Deactivated successfully. 
Jan 23 17:36:21.392008 systemd[1]: cri-containerd-607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea.scope: Consumed 5.643s CPU time, 65.4M memory peak. Jan 23 17:36:21.391000 audit: BPF prog-id=256 op=LOAD Jan 23 17:36:21.392000 audit: BPF prog-id=88 op=UNLOAD Jan 23 17:36:21.394425 kernel: audit: type=1334 audit(1769189781.391:885): prog-id=256 op=LOAD Jan 23 17:36:21.394492 kernel: audit: type=1334 audit(1769189781.392:886): prog-id=88 op=UNLOAD Jan 23 17:36:21.394554 containerd[1668]: time="2026-01-23T17:36:21.394521190Z" level=info msg="received container exit event container_id:\"607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea\" id:\"607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea\" pid:2784 exit_status:1 exited_at:{seconds:1769189781 nanos:394148509}" Jan 23 17:36:21.396000 audit: BPF prog-id=103 op=UNLOAD Jan 23 17:36:21.396000 audit: BPF prog-id=107 op=UNLOAD Jan 23 17:36:21.399361 kernel: audit: type=1334 audit(1769189781.396:887): prog-id=103 op=UNLOAD Jan 23 17:36:21.399410 kernel: audit: type=1334 audit(1769189781.396:888): prog-id=107 op=UNLOAD Jan 23 17:36:21.415693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea-rootfs.mount: Deactivated successfully. 
Jan 23 17:36:21.464778 kubelet[2951]: I0123 17:36:21.464275 2951 scope.go:117] "RemoveContainer" containerID="607d8d7559e018114ac529135be3e831611f5b5658835f5c1e91b98cdd3331ea" Jan 23 17:36:21.466920 containerd[1668]: time="2026-01-23T17:36:21.466888386Z" level=info msg="CreateContainer within sandbox \"47409e2447e6e4077f15a10fe271d1bc1294f0526e4276965472105749ff31c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 17:36:21.468316 kubelet[2951]: I0123 17:36:21.468123 2951 scope.go:117] "RemoveContainer" containerID="19dedadc5c35e3414a496ef74b1e810a7e93676e54c65ed9f38aa09ff5fff40c" Jan 23 17:36:21.470648 containerd[1668]: time="2026-01-23T17:36:21.470613604Z" level=info msg="CreateContainer within sandbox \"3e15772bbbb00a65e922a4e6722f642c8569c6d92df1a13b256c2063175ffe70\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 17:36:21.480767 kubelet[2951]: E0123 17:36:21.480617 2951 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.6.147:37490->10.0.6.231:2379: read: connection timed out" Jan 23 17:36:21.487836 containerd[1668]: time="2026-01-23T17:36:21.487435567Z" level=info msg="Container 2208272086383ea8eec44a1da4dfc19a0f5ce486d0515e5f30e671837cb19144: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:36:21.489268 containerd[1668]: time="2026-01-23T17:36:21.489230936Z" level=info msg="received container exit event container_id:\"4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be\" id:\"4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be\" pid:2804 exit_status:1 exited_at:{seconds:1769189781 nanos:488045370}" Jan 23 17:36:21.493956 containerd[1668]: time="2026-01-23T17:36:21.493905079Z" level=info msg="Container 60c40fca4387094175058b2dd81bc1fdc87835a5e9ff9c007d2a11aedcda7173: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:36:21.494263 systemd[1]: 
cri-containerd-4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be.scope: Deactivated successfully. Jan 23 17:36:21.494746 systemd[1]: cri-containerd-4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be.scope: Consumed 3.202s CPU time, 23.2M memory peak. Jan 23 17:36:21.495000 audit: BPF prog-id=257 op=LOAD Jan 23 17:36:21.495000 audit: BPF prog-id=93 op=UNLOAD Jan 23 17:36:21.498582 kernel: audit: type=1334 audit(1769189781.495:889): prog-id=257 op=LOAD Jan 23 17:36:21.498649 kernel: audit: type=1334 audit(1769189781.495:890): prog-id=93 op=UNLOAD Jan 23 17:36:21.499000 audit: BPF prog-id=108 op=UNLOAD Jan 23 17:36:21.502265 kernel: audit: type=1334 audit(1769189781.499:891): prog-id=108 op=UNLOAD Jan 23 17:36:21.502342 kernel: audit: type=1334 audit(1769189781.499:892): prog-id=112 op=UNLOAD Jan 23 17:36:21.499000 audit: BPF prog-id=112 op=UNLOAD Jan 23 17:36:21.503372 containerd[1668]: time="2026-01-23T17:36:21.503313245Z" level=info msg="CreateContainer within sandbox \"47409e2447e6e4077f15a10fe271d1bc1294f0526e4276965472105749ff31c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2208272086383ea8eec44a1da4dfc19a0f5ce486d0515e5f30e671837cb19144\"" Jan 23 17:36:21.503854 containerd[1668]: time="2026-01-23T17:36:21.503814687Z" level=info msg="StartContainer for \"2208272086383ea8eec44a1da4dfc19a0f5ce486d0515e5f30e671837cb19144\"" Jan 23 17:36:21.505479 containerd[1668]: time="2026-01-23T17:36:21.505448055Z" level=info msg="connecting to shim 2208272086383ea8eec44a1da4dfc19a0f5ce486d0515e5f30e671837cb19144" address="unix:///run/containerd/s/1dcc55d8d4d3c45d28465ec0e863f6f4e21607eadf312b84dbcc6dde08186c94" protocol=ttrpc version=3 Jan 23 17:36:21.507965 containerd[1668]: time="2026-01-23T17:36:21.507934787Z" level=info msg="CreateContainer within sandbox \"3e15772bbbb00a65e922a4e6722f642c8569c6d92df1a13b256c2063175ffe70\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id 
\"60c40fca4387094175058b2dd81bc1fdc87835a5e9ff9c007d2a11aedcda7173\"" Jan 23 17:36:21.508563 containerd[1668]: time="2026-01-23T17:36:21.508539510Z" level=info msg="StartContainer for \"60c40fca4387094175058b2dd81bc1fdc87835a5e9ff9c007d2a11aedcda7173\"" Jan 23 17:36:21.509654 containerd[1668]: time="2026-01-23T17:36:21.509461635Z" level=info msg="connecting to shim 60c40fca4387094175058b2dd81bc1fdc87835a5e9ff9c007d2a11aedcda7173" address="unix:///run/containerd/s/84765a42794d95cafb5d6a7f2d78d655b901520979a81ccbec6e50b57cfb2435" protocol=ttrpc version=3 Jan 23 17:36:21.531953 systemd[1]: Started cri-containerd-60c40fca4387094175058b2dd81bc1fdc87835a5e9ff9c007d2a11aedcda7173.scope - libcontainer container 60c40fca4387094175058b2dd81bc1fdc87835a5e9ff9c007d2a11aedcda7173. Jan 23 17:36:21.535334 systemd[1]: Started cri-containerd-2208272086383ea8eec44a1da4dfc19a0f5ce486d0515e5f30e671837cb19144.scope - libcontainer container 2208272086383ea8eec44a1da4dfc19a0f5ce486d0515e5f30e671837cb19144. Jan 23 17:36:21.542000 audit: BPF prog-id=258 op=LOAD Jan 23 17:36:21.543000 audit: BPF prog-id=259 op=LOAD Jan 23 17:36:21.543000 audit[5690]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3135 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633430666361343338373039343137353035386232646438316263 Jan 23 17:36:21.543000 audit: BPF prog-id=259 op=UNLOAD Jan 23 17:36:21.543000 audit[5690]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633430666361343338373039343137353035386232646438316263 Jan 23 17:36:21.543000 audit: BPF prog-id=260 op=LOAD Jan 23 17:36:21.543000 audit[5690]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3135 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633430666361343338373039343137353035386232646438316263 Jan 23 17:36:21.544000 audit: BPF prog-id=261 op=LOAD Jan 23 17:36:21.544000 audit[5690]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3135 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633430666361343338373039343137353035386232646438316263 Jan 23 17:36:21.544000 audit: BPF prog-id=261 op=UNLOAD Jan 23 17:36:21.544000 audit[5690]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=5690 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633430666361343338373039343137353035386232646438316263 Jan 23 17:36:21.544000 audit: BPF prog-id=260 op=UNLOAD Jan 23 17:36:21.544000 audit[5690]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3135 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633430666361343338373039343137353035386232646438316263 Jan 23 17:36:21.544000 audit: BPF prog-id=262 op=LOAD Jan 23 17:36:21.544000 audit[5690]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3135 pid=5690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633430666361343338373039343137353035386232646438316263 Jan 23 17:36:21.549000 audit: BPF prog-id=263 op=LOAD Jan 23 17:36:21.552000 audit: BPF prog-id=264 op=LOAD Jan 23 17:36:21.552000 audit[5688]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2654 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303832373230383633383365613865656334346131646134646663 Jan 23 17:36:21.552000 audit: BPF prog-id=264 op=UNLOAD Jan 23 17:36:21.552000 audit[5688]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303832373230383633383365613865656334346131646134646663 Jan 23 17:36:21.553000 audit: BPF prog-id=265 op=LOAD Jan 23 17:36:21.553000 audit[5688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2654 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303832373230383633383365613865656334346131646134646663 Jan 23 17:36:21.553000 audit: BPF prog-id=266 op=LOAD Jan 23 
17:36:21.553000 audit[5688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2654 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303832373230383633383365613865656334346131646134646663 Jan 23 17:36:21.553000 audit: BPF prog-id=266 op=UNLOAD Jan 23 17:36:21.553000 audit[5688]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303832373230383633383365613865656334346131646134646663 Jan 23 17:36:21.553000 audit: BPF prog-id=265 op=UNLOAD Jan 23 17:36:21.553000 audit[5688]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2654 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303832373230383633383365613865656334346131646134646663 Jan 23 17:36:21.553000 
audit: BPF prog-id=267 op=LOAD Jan 23 17:36:21.553000 audit[5688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2654 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:21.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303832373230383633383365613865656334346131646134646663 Jan 23 17:36:21.569748 containerd[1668]: time="2026-01-23T17:36:21.569708211Z" level=info msg="StartContainer for \"60c40fca4387094175058b2dd81bc1fdc87835a5e9ff9c007d2a11aedcda7173\" returns successfully" Jan 23 17:36:21.587262 containerd[1668]: time="2026-01-23T17:36:21.587039416Z" level=info msg="StartContainer for \"2208272086383ea8eec44a1da4dfc19a0f5ce486d0515e5f30e671837cb19144\" returns successfully" Jan 23 17:36:21.629378 containerd[1668]: time="2026-01-23T17:36:21.629307544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:36:21.631014 containerd[1668]: time="2026-01-23T17:36:21.630942352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:36:21.631195 containerd[1668]: time="2026-01-23T17:36:21.630990552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:36:21.631315 kubelet[2951]: E0123 17:36:21.631274 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:36:21.631385 kubelet[2951]: E0123 17:36:21.631324 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:36:21.631483 kubelet[2951]: E0123 17:36:21.631443 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btn72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6686778f54-qdppv_calico-apiserver(287c72fe-ff92-46ec-9e19-273465100dda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:36:21.632629 kubelet[2951]: E0123 17:36:21.632585 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6686778f54-qdppv" podUID="287c72fe-ff92-46ec-9e19-273465100dda" Jan 23 17:36:22.089155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be-rootfs.mount: Deactivated successfully. 
Jan 23 17:36:22.477630 kubelet[2951]: I0123 17:36:22.477462 2951 scope.go:117] "RemoveContainer" containerID="4f81618278776b53d653aa055371f45d5eb70d70ff3d9d402bcddb944ede28be" Jan 23 17:36:22.479682 containerd[1668]: time="2026-01-23T17:36:22.479647080Z" level=info msg="CreateContainer within sandbox \"73d763fdf8dffe6ca275c903521e47e7610bc20924b29199b574923cc98c396a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 17:36:22.491064 containerd[1668]: time="2026-01-23T17:36:22.490984735Z" level=info msg="Container 1b68461f01edb04837cdcb5a818052256e4fb40dbf2568c38bf0b9ee668c359f: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:36:22.500312 containerd[1668]: time="2026-01-23T17:36:22.500269341Z" level=info msg="CreateContainer within sandbox \"73d763fdf8dffe6ca275c903521e47e7610bc20924b29199b574923cc98c396a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1b68461f01edb04837cdcb5a818052256e4fb40dbf2568c38bf0b9ee668c359f\"" Jan 23 17:36:22.501145 containerd[1668]: time="2026-01-23T17:36:22.501119305Z" level=info msg="StartContainer for \"1b68461f01edb04837cdcb5a818052256e4fb40dbf2568c38bf0b9ee668c359f\"" Jan 23 17:36:22.502237 containerd[1668]: time="2026-01-23T17:36:22.502210471Z" level=info msg="connecting to shim 1b68461f01edb04837cdcb5a818052256e4fb40dbf2568c38bf0b9ee668c359f" address="unix:///run/containerd/s/0e9764bff20e28a437df63199d0e3c3b87260e9fac32d50afa1ebab2f92af1db" protocol=ttrpc version=3 Jan 23 17:36:22.525193 systemd[1]: Started cri-containerd-1b68461f01edb04837cdcb5a818052256e4fb40dbf2568c38bf0b9ee668c359f.scope - libcontainer container 1b68461f01edb04837cdcb5a818052256e4fb40dbf2568c38bf0b9ee668c359f. 
Jan 23 17:36:22.536000 audit: BPF prog-id=268 op=LOAD Jan 23 17:36:22.536000 audit: BPF prog-id=269 op=LOAD Jan 23 17:36:22.536000 audit[5755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2687 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:22.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162363834363166303165646230343833376364636235613831383035 Jan 23 17:36:22.536000 audit: BPF prog-id=269 op=UNLOAD Jan 23 17:36:22.536000 audit[5755]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:22.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162363834363166303165646230343833376364636235613831383035 Jan 23 17:36:22.536000 audit: BPF prog-id=270 op=LOAD Jan 23 17:36:22.536000 audit[5755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2687 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:22.536000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162363834363166303165646230343833376364636235613831383035 Jan 23 17:36:22.536000 audit: BPF prog-id=271 op=LOAD Jan 23 17:36:22.536000 audit[5755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2687 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:22.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162363834363166303165646230343833376364636235613831383035 Jan 23 17:36:22.536000 audit: BPF prog-id=271 op=UNLOAD Jan 23 17:36:22.536000 audit[5755]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:22.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162363834363166303165646230343833376364636235613831383035 Jan 23 17:36:22.536000 audit: BPF prog-id=270 op=UNLOAD Jan 23 17:36:22.536000 audit[5755]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2687 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:36:22.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162363834363166303165646230343833376364636235613831383035 Jan 23 17:36:22.536000 audit: BPF prog-id=272 op=LOAD Jan 23 17:36:22.536000 audit[5755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2687 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:36:22.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162363834363166303165646230343833376364636235613831383035 Jan 23 17:36:22.561981 containerd[1668]: time="2026-01-23T17:36:22.561941484Z" level=info msg="StartContainer for \"1b68461f01edb04837cdcb5a818052256e4fb40dbf2568c38bf0b9ee668c359f\" returns successfully" Jan 23 17:36:22.949896 kubelet[2951]: E0123 17:36:22.949848 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f66fb8c4-942js" podUID="e9193ceb-0e99-470a-bb0b-413b40f3616d" Jan 23 17:36:24.948891 containerd[1668]: time="2026-01-23T17:36:24.948805806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:36:25.308570 containerd[1668]: 
time="2026-01-23T17:36:25.308308172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 17:36:25.309680 containerd[1668]: time="2026-01-23T17:36:25.309641778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 17:36:25.309855 containerd[1668]: time="2026-01-23T17:36:25.309655139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 23 17:36:25.310120 kubelet[2951]: E0123 17:36:25.310035 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 17:36:25.310393 kubelet[2951]: E0123 17:36:25.310120 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 17:36:25.310393 kubelet[2951]: E0123 17:36:25.310355 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 17:36:25.312568 containerd[1668]: time="2026-01-23T17:36:25.312211311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 17:36:25.632282 containerd[1668]: time="2026-01-23T17:36:25.632074282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 17:36:25.633481 containerd[1668]: time="2026-01-23T17:36:25.633438329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 17:36:25.633910 containerd[1668]: time="2026-01-23T17:36:25.633517449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 23 17:36:25.633950 kubelet[2951]: E0123 17:36:25.633638 2951 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 17:36:25.633950 kubelet[2951]: E0123 17:36:25.633683 2951 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 17:36:25.633950 kubelet[2951]: E0123 17:36:25.633815 2951 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gfmmc_calico-system(decbe37e-6413-4ebb-af5d-fd959613c007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 17:36:25.635092 kubelet[2951]: E0123 17:36:25.634987 2951 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gfmmc" podUID="decbe37e-6413-4ebb-af5d-fd959613c007"