Dec 12 17:24:01.411826 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:24:01.411863 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025
Dec 12 17:24:01.411874 kernel: KASLR enabled
Dec 12 17:24:01.411880 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:24:01.411886 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438351218
Dec 12 17:24:01.411891 kernel: random: crng init done
Dec 12 17:24:01.411898 kernel: secureboot: Secure boot disabled
Dec 12 17:24:01.411904 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:24:01.411910 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Dec 12 17:24:01.411917 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:24:01.411924 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411930 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411935 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411942 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411951 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411957 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411964 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411970 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411977 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411983 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:01.411989 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 12 17:24:01.411996 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 12 17:24:01.412002 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:24:01.412010 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Dec 12 17:24:01.412016 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Dec 12 17:24:01.412022 kernel: Zone ranges:
Dec 12 17:24:01.412029 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 12 17:24:01.412035 kernel: DMA32 empty
Dec 12 17:24:01.412041 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Dec 12 17:24:01.412047 kernel: Device empty
Dec 12 17:24:01.412054 kernel: Movable zone start for each node
Dec 12 17:24:01.412060 kernel: Early memory node ranges
Dec 12 17:24:01.412066 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff]
Dec 12 17:24:01.412073 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff]
Dec 12 17:24:01.412079 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff]
Dec 12 17:24:01.412086 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff]
Dec 12 17:24:01.412093 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff]
Dec 12 17:24:01.412099 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Dec 12 17:24:01.412105 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Dec 12 17:24:01.412114 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:24:01.412126 kernel: psci: PSCIv1.3 detected in firmware.
Dec 12 17:24:01.412134 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:24:01.412141 kernel: psci: Trusted OS migration not required
Dec 12 17:24:01.412147 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:24:01.412154 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:24:01.412161 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 12 17:24:01.412168 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 12 17:24:01.412175 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Dec 12 17:24:01.412182 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Dec 12 17:24:01.412190 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:24:01.412197 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:24:01.412204 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:24:01.412210 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:24:01.412217 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:24:01.412224 kernel: CPU features: detected: Spectre-v4
Dec 12 17:24:01.412231 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:24:01.412238 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:24:01.412244 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:24:01.412251 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:24:01.412258 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:24:01.412266 kernel: alternatives: applying boot alternatives
Dec 12 17:24:01.412274 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 12 17:24:01.412281 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 12 17:24:01.412288 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 12 17:24:01.412295 kernel: Fallback order for Node 0: 0
Dec 12 17:24:01.412301 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Dec 12 17:24:01.412308 kernel: Policy zone: Normal
Dec 12 17:24:01.412315 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:24:01.412322 kernel: software IO TLB: area num 4.
Dec 12 17:24:01.412328 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Dec 12 17:24:01.412337 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:24:01.412343 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:24:01.412351 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:24:01.412358 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:24:01.412365 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:24:01.412372 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:24:01.412379 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:24:01.412386 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:24:01.412393 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:24:01.412400 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:24:01.412407 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:24:01.412431 kernel: GICv3: 256 SPIs implemented
Dec 12 17:24:01.412438 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:24:01.412445 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:24:01.412452 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:24:01.412459 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:24:01.412465 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:24:01.412472 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:24:01.412479 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:24:01.412486 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:24:01.412493 kernel: GICv3: using LPI property table @0x0000000100130000
Dec 12 17:24:01.412500 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Dec 12 17:24:01.412507 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:24:01.412515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:24:01.412522 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:24:01.412529 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:24:01.412536 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:24:01.412542 kernel: arm-pv: using stolen time PV
Dec 12 17:24:01.412550 kernel: Console: colour dummy device 80x25
Dec 12 17:24:01.412557 kernel: ACPI: Core revision 20240827
Dec 12 17:24:01.412565 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:24:01.412574 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:24:01.412581 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:24:01.412588 kernel: landlock: Up and running.
Dec 12 17:24:01.412595 kernel: SELinux: Initializing.
Dec 12 17:24:01.412602 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:24:01.412609 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:24:01.412616 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:24:01.412623 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:24:01.412632 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:24:01.412639 kernel: Remapping and enabling EFI services.
Dec 12 17:24:01.412646 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:24:01.412653 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:24:01.412660 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:24:01.412667 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Dec 12 17:24:01.412675 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:24:01.412683 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:24:01.412690 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:24:01.412702 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:24:01.412711 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Dec 12 17:24:01.412718 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:24:01.412726 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:24:01.412733 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:24:01.412741 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:24:01.412749 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Dec 12 17:24:01.412757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:24:01.412764 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:24:01.412771 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:24:01.412779 kernel: SMP: Total of 4 processors activated.
Dec 12 17:24:01.412786 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:24:01.412795 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:24:01.412803 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:24:01.412810 kernel: CPU features: detected: Common not Private translations
Dec 12 17:24:01.412818 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:24:01.412825 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:24:01.412840 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:24:01.412848 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:24:01.412857 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:24:01.412865 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:24:01.412872 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:24:01.412879 kernel: alternatives: applying system-wide alternatives
Dec 12 17:24:01.412887 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:24:01.412895 kernel: Memory: 16324496K/16777216K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 429936K reserved, 16384K cma-reserved)
Dec 12 17:24:01.412902 kernel: devtmpfs: initialized
Dec 12 17:24:01.412911 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:24:01.412919 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:24:01.412926 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:24:01.412934 kernel: 0 pages in range for non-PLT usage
Dec 12 17:24:01.412941 kernel: 515184 pages in range for PLT usage
Dec 12 17:24:01.412948 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:24:01.412956 kernel: SMBIOS 3.0.0 present.
Dec 12 17:24:01.412963 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 12 17:24:01.412972 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:24:01.412979 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:24:01.412987 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:24:01.412994 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:24:01.413002 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:24:01.413009 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:24:01.413017 kernel: audit: type=2000 audit(0.039:1): state=initialized audit_enabled=0 res=1
Dec 12 17:24:01.413025 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:24:01.413033 kernel: cpuidle: using governor menu
Dec 12 17:24:01.413040 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:24:01.413048 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:24:01.413055 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:24:01.413062 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:24:01.413070 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:24:01.413079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:24:01.413086 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:24:01.413094 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:24:01.413101 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:24:01.413108 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:24:01.413116 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:24:01.413123 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:24:01.413132 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:24:01.413140 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:24:01.413147 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:24:01.413154 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:24:01.413162 kernel: ACPI: Interpreter enabled
Dec 12 17:24:01.413169 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:24:01.413177 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:24:01.413184 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:24:01.413193 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:24:01.413200 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:24:01.413208 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:24:01.413215 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:24:01.413223 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:24:01.413230 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:24:01.413395 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:24:01.413484 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:24:01.413564 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:24:01.413643 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:24:01.413721 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:24:01.413731 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:24:01.413739 kernel: PCI host bridge to bus 0000:00
Dec 12 17:24:01.413823 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:24:01.413911 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:24:01.414000 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:24:01.414076 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:24:01.414173 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:24:01.414266 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.414352 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff]
Dec 12 17:24:01.414433 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 12 17:24:01.414515 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff]
Dec 12 17:24:01.414594 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Dec 12 17:24:01.414682 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.414763 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff]
Dec 12 17:24:01.414851 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Dec 12 17:24:01.414934 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff]
Dec 12 17:24:01.415019 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.415098 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff]
Dec 12 17:24:01.415188 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Dec 12 17:24:01.415267 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff]
Dec 12 17:24:01.415345 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Dec 12 17:24:01.415429 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.415507 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff]
Dec 12 17:24:01.415584 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Dec 12 17:24:01.415665 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Dec 12 17:24:01.415750 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.415829 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff]
Dec 12 17:24:01.416995 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Dec 12 17:24:01.417079 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff]
Dec 12 17:24:01.417174 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Dec 12 17:24:01.417276 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.417357 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff]
Dec 12 17:24:01.417435 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Dec 12 17:24:01.417513 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff]
Dec 12 17:24:01.417592 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Dec 12 17:24:01.417683 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.417766 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff]
Dec 12 17:24:01.418935 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Dec 12 17:24:01.419207 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.419380 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff]
Dec 12 17:24:01.419467 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Dec 12 17:24:01.419558 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.419649 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff]
Dec 12 17:24:01.419730 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Dec 12 17:24:01.419818 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.419927 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff]
Dec 12 17:24:01.420015 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Dec 12 17:24:01.420107 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.420207 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff]
Dec 12 17:24:01.420288 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Dec 12 17:24:01.420389 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.420484 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff]
Dec 12 17:24:01.420570 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Dec 12 17:24:01.420657 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.420737 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff]
Dec 12 17:24:01.420815 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Dec 12 17:24:01.420914 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.420996 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff]
Dec 12 17:24:01.421076 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Dec 12 17:24:01.421168 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.421249 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff]
Dec 12 17:24:01.421328 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Dec 12 17:24:01.421414 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.421496 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff]
Dec 12 17:24:01.421576 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Dec 12 17:24:01.421688 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.421770 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff]
Dec 12 17:24:01.421856 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Dec 12 17:24:01.421952 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.422037 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff]
Dec 12 17:24:01.422117 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Dec 12 17:24:01.422196 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff]
Dec 12 17:24:01.422276 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff]
Dec 12 17:24:01.422372 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.422457 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff]
Dec 12 17:24:01.422540 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Dec 12 17:24:01.422619 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff]
Dec 12 17:24:01.422700 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff]
Dec 12 17:24:01.422789 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.422939 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff]
Dec 12 17:24:01.423028 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Dec 12 17:24:01.423128 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]
Dec 12 17:24:01.423206 kernel: pci 0000:00:03.3: bridge window [mem 0x11a00000-0x11bfffff]
Dec 12 17:24:01.423296 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.423379 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff]
Dec 12 17:24:01.423461 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Dec 12 17:24:01.423542 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff]
Dec 12 17:24:01.423625 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff]
Dec 12 17:24:01.423714 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.423793 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff]
Dec 12 17:24:01.423888 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Dec 12 17:24:01.423978 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff]
Dec 12 17:24:01.424067 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff]
Dec 12 17:24:01.424161 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.424240 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff]
Dec 12 17:24:01.424324 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Dec 12 17:24:01.424406 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff]
Dec 12 17:24:01.424506 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff]
Dec 12 17:24:01.424596 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.424698 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff]
Dec 12 17:24:01.424777 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Dec 12 17:24:01.424874 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff]
Dec 12 17:24:01.424991 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff]
Dec 12 17:24:01.425087 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.425169 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff]
Dec 12 17:24:01.425252 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Dec 12 17:24:01.425335 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff]
Dec 12 17:24:01.425417 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff]
Dec 12 17:24:01.425503 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.425582 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff]
Dec 12 17:24:01.425663 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Dec 12 17:24:01.425740 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff]
Dec 12 17:24:01.425818 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff]
Dec 12 17:24:01.425924 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.426007 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff]
Dec 12 17:24:01.426093 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Dec 12 17:24:01.426174 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff]
Dec 12 17:24:01.426252 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff]
Dec 12 17:24:01.426345 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.426428 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff]
Dec 12 17:24:01.426508 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Dec 12 17:24:01.426585 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff]
Dec 12 17:24:01.426667 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff]
Dec 12 17:24:01.426764 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.426862 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff]
Dec 12 17:24:01.426949 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Dec 12 17:24:01.427035 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff]
Dec 12 17:24:01.427117 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff]
Dec 12 17:24:01.427204 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.427290 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff]
Dec 12 17:24:01.427370 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Dec 12 17:24:01.427449 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff]
Dec 12 17:24:01.427531 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff]
Dec 12 17:24:01.427617 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.427697 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff]
Dec 12 17:24:01.427775 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Dec 12 17:24:01.427869 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff]
Dec 12 17:24:01.427949 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff]
Dec 12 17:24:01.428046 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.428128 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff]
Dec 12 17:24:01.428209 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Dec 12 17:24:01.428287 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff]
Dec 12 17:24:01.428365 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff]
Dec 12 17:24:01.428464 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:01.428549 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff]
Dec 12 17:24:01.428628 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Dec 12 17:24:01.428705 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff]
Dec 12 17:24:01.428783 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff]
Dec 12 17:24:01.428883 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 12 17:24:01.428971 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff]
Dec 12 17:24:01.429052 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:24:01.429133 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Dec 12 17:24:01.429221 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 12 17:24:01.429308 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit]
Dec 12 17:24:01.429406 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Dec 12 17:24:01.429493 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff]
Dec 12 17:24:01.429582 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 12 17:24:01.429677 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 12 17:24:01.429759 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 12 17:24:01.429879 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 12 17:24:01.429976 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff]
Dec 12 17:24:01.430066 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 12 17:24:01.430160 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint
Dec 12 17:24:01.430242 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff]
Dec 12 17:24:01.430323 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 12 17:24:01.430406 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 12 17:24:01.430490 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 12 17:24:01.430573 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 12 17:24:01.430661 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 12 17:24:01.430741 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 12 17:24:01.430822 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 12 17:24:01.430918 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 12 17:24:01.430998 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 12 17:24:01.431076 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 12 17:24:01.431158 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 12 17:24:01.431240 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 12 17:24:01.431319 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 12 17:24:01.431405 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 12 17:24:01.431485 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 12 17:24:01.431562 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Dec 12 17:24:01.431649 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 12 17:24:01.431731 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 12 17:24:01.431812 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 12 17:24:01.431952 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 12 17:24:01.432035 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000
Dec 12 17:24:01.432121 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000
Dec 12 17:24:01.432207 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 12 17:24:01.432290 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 12 17:24:01.432369 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 12 17:24:01.432467 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 12 17:24:01.432548 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 12 17:24:01.432627 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 12 17:24:01.432718 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 12 17:24:01.432798 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000
Dec 12 17:24:01.432900 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000
Dec 12 17:24:01.432987 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000
Dec 12 17:24:01.433066 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Dec 12 17:24:01.433144 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000
Dec 12 17:24:01.433235 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000
Dec 12 17:24:01.433317 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000
Dec 12 17:24:01.433395 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000
Dec 12 17:24:01.433478 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000
Dec 12 17:24:01.433557 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000
Dec 12 17:24:01.433635 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000
Dec 12
17:24:01.433720 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 12 17:24:01.433799 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Dec 12 17:24:01.433898 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Dec 12 17:24:01.434012 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 12 17:24:01.434101 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Dec 12 17:24:01.434188 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Dec 12 17:24:01.434273 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 12 17:24:01.434355 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Dec 12 17:24:01.434434 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Dec 12 17:24:01.434518 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 12 17:24:01.434603 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Dec 12 17:24:01.434691 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Dec 12 17:24:01.434775 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 12 17:24:01.434876 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Dec 12 17:24:01.434962 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 Dec 12 17:24:01.435044 kernel: pci 0000:00:03.2: 
bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Dec 12 17:24:01.435126 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Dec 12 17:24:01.435205 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Dec 12 17:24:01.435286 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Dec 12 17:24:01.435364 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Dec 12 17:24:01.435444 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Dec 12 17:24:01.435534 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Dec 12 17:24:01.435621 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Dec 12 17:24:01.435703 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Dec 12 17:24:01.435785 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 12 17:24:01.435881 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Dec 12 17:24:01.435962 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Dec 12 17:24:01.436073 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 12 17:24:01.436159 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Dec 12 17:24:01.436238 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Dec 12 17:24:01.436320 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] 
add_size 1000 Dec 12 17:24:01.436399 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Dec 12 17:24:01.436495 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Dec 12 17:24:01.436583 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 12 17:24:01.436669 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Dec 12 17:24:01.436750 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Dec 12 17:24:01.436842 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 12 17:24:01.436924 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Dec 12 17:24:01.437003 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Dec 12 17:24:01.437093 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Dec 12 17:24:01.437178 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Dec 12 17:24:01.437257 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Dec 12 17:24:01.437339 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Dec 12 17:24:01.437419 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Dec 12 17:24:01.437501 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Dec 12 17:24:01.437582 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 12 17:24:01.437662 kernel: 
pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Dec 12 17:24:01.437742 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Dec 12 17:24:01.437824 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 12 17:24:01.437952 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Dec 12 17:24:01.438036 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Dec 12 17:24:01.438131 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 12 17:24:01.438215 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Dec 12 17:24:01.438299 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Dec 12 17:24:01.438385 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 12 17:24:01.438468 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Dec 12 17:24:01.438553 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Dec 12 17:24:01.438653 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 12 17:24:01.438736 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Dec 12 17:24:01.438821 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Dec 12 17:24:01.438915 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Dec 12 17:24:01.438999 kernel: pci 0000:00:01.0: bridge window [mem 
0x8000000000-0x80001fffff 64bit pref]: assigned Dec 12 17:24:01.439082 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Dec 12 17:24:01.439162 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Dec 12 17:24:01.439246 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Dec 12 17:24:01.439326 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Dec 12 17:24:01.439410 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Dec 12 17:24:01.439492 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Dec 12 17:24:01.439577 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Dec 12 17:24:01.439657 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Dec 12 17:24:01.439739 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Dec 12 17:24:01.439818 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Dec 12 17:24:01.439908 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Dec 12 17:24:01.439989 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Dec 12 17:24:01.440080 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Dec 12 17:24:01.440164 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Dec 12 17:24:01.440249 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Dec 12 17:24:01.440332 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Dec 12 17:24:01.440558 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Dec 12 17:24:01.440663 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: 
assigned Dec 12 17:24:01.440768 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Dec 12 17:24:01.440870 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Dec 12 17:24:01.440955 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Dec 12 17:24:01.441035 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Dec 12 17:24:01.441118 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Dec 12 17:24:01.441197 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Dec 12 17:24:01.441278 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Dec 12 17:24:01.441362 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Dec 12 17:24:01.441450 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Dec 12 17:24:01.441533 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Dec 12 17:24:01.441626 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Dec 12 17:24:01.441706 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Dec 12 17:24:01.441788 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Dec 12 17:24:01.441905 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Dec 12 17:24:01.441995 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Dec 12 17:24:01.442078 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Dec 12 17:24:01.442179 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Dec 12 17:24:01.442259 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Dec 12 17:24:01.442341 kernel: pci 
0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Dec 12 17:24:01.442430 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Dec 12 17:24:01.442512 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Dec 12 17:24:01.442590 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Dec 12 17:24:01.442672 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Dec 12 17:24:01.442752 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Dec 12 17:24:01.442854 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Dec 12 17:24:01.442940 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Dec 12 17:24:01.443028 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Dec 12 17:24:01.443113 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Dec 12 17:24:01.443197 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Dec 12 17:24:01.443284 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Dec 12 17:24:01.443368 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Dec 12 17:24:01.443452 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Dec 12 17:24:01.443538 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Dec 12 17:24:01.443618 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Dec 12 17:24:01.443700 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Dec 12 17:24:01.443779 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Dec 12 17:24:01.443874 kernel: pci 0000:00:04.4: bridge window [mem 
0x13800000-0x139fffff]: assigned Dec 12 17:24:01.443959 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Dec 12 17:24:01.444051 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Dec 12 17:24:01.444132 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Dec 12 17:24:01.444222 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Dec 12 17:24:01.444303 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Dec 12 17:24:01.444390 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Dec 12 17:24:01.444486 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Dec 12 17:24:01.444571 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Dec 12 17:24:01.444654 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Dec 12 17:24:01.444736 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Dec 12 17:24:01.444818 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Dec 12 17:24:01.444912 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Dec 12 17:24:01.444997 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Dec 12 17:24:01.445081 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Dec 12 17:24:01.445180 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Dec 12 17:24:01.445270 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Dec 12 17:24:01.445352 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Dec 12 17:24:01.445435 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Dec 12 17:24:01.445525 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Dec 12 17:24:01.445606 kernel: pci 0000:00:01.5: BAR 0 
[mem 0x14205000-0x14205fff]: assigned Dec 12 17:24:01.445687 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Dec 12 17:24:01.445772 kernel: pci 0000:00:01.6: BAR 0 [mem 0x14206000-0x14206fff]: assigned Dec 12 17:24:01.445871 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Dec 12 17:24:01.445954 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Dec 12 17:24:01.446033 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Dec 12 17:24:01.446122 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Dec 12 17:24:01.446205 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Dec 12 17:24:01.446292 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Dec 12 17:24:01.446370 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Dec 12 17:24:01.446449 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Dec 12 17:24:01.446528 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Dec 12 17:24:01.446608 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Dec 12 17:24:01.446705 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Dec 12 17:24:01.446788 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Dec 12 17:24:01.446881 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Dec 12 17:24:01.446964 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Dec 12 17:24:01.447043 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Dec 12 17:24:01.447124 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Dec 12 17:24:01.447212 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Dec 12 17:24:01.447293 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Dec 12 17:24:01.447372 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 
17:24:01.447455 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.447536 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Dec 12 17:24:01.447614 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.447694 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.447774 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Dec 12 17:24:01.447861 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.447941 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.448020 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Dec 12 17:24:01.448099 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.448179 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.448268 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Dec 12 17:24:01.448350 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.448440 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.448528 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Dec 12 17:24:01.448608 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.448686 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.448769 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Dec 12 17:24:01.448864 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.448947 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.449026 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Dec 12 17:24:01.449105 kernel: pci 
0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.449182 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.449265 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Dec 12 17:24:01.449344 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.449422 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.449502 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Dec 12 17:24:01.449580 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.449666 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.449752 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Dec 12 17:24:01.449844 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.449931 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.450012 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Dec 12 17:24:01.450091 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.450175 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.450255 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Dec 12 17:24:01.450337 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.450416 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.450495 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Dec 12 17:24:01.450572 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.450654 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.450736 kernel: pci 0000:00:04.5: BAR 0 [mem 
0x1421d000-0x1421dfff]: assigned Dec 12 17:24:01.450817 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.450904 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.450984 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Dec 12 17:24:01.451062 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.451141 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.451221 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Dec 12 17:24:01.451299 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.451380 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.451459 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Dec 12 17:24:01.451538 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.451616 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.451695 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Dec 12 17:24:01.451774 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Dec 12 17:24:01.451865 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Dec 12 17:24:01.451951 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Dec 12 17:24:01.452036 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Dec 12 17:24:01.452143 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Dec 12 17:24:01.452226 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Dec 12 17:24:01.452327 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Dec 12 17:24:01.452409 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Dec 12 17:24:01.452517 kernel: pci 0000:00:03.7: 
bridge window [io 0xa000-0xafff]: assigned Dec 12 17:24:01.452606 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Dec 12 17:24:01.452687 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Dec 12 17:24:01.452775 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Dec 12 17:24:01.452874 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Dec 12 17:24:01.452956 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Dec 12 17:24:01.453036 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.453123 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.453208 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.453288 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.453374 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.453453 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.453541 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.453620 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.453703 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.453789 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.453883 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.453964 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.454044 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:24:01.454123 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:24:01.454202 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: 
can't assign; no space
Dec 12 17:24:01.454288 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.454368 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.454452 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.454539 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.454621 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.454703 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.454786 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.454877 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.454959 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.455040 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.455118 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.455199 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.455287 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.455369 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.455449 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.455530 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.455609 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.455692 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.455773 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.455861 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Dec 12 17:24:01.455941 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign
Dec 12 17:24:01.456029 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Dec 12 17:24:01.456116 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:24:01.456200 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Dec 12 17:24:01.456297 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 12 17:24:01.456378 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]
Dec 12 17:24:01.456481 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 12 17:24:01.456688 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Dec 12 17:24:01.456796 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Dec 12 17:24:01.456914 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]
Dec 12 17:24:01.456998 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 12 17:24:01.457087 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Dec 12 17:24:01.457169 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Dec 12 17:24:01.457250 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Dec 12 17:24:01.457329 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]
Dec 12 17:24:01.457419 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 12 17:24:01.457508 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Dec 12 17:24:01.457589 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Dec 12 17:24:01.457672 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]
Dec 12 17:24:01.457751 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 12 17:24:01.457850 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Dec 12 17:24:01.457938 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Dec 12 17:24:01.458019 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Dec 12 17:24:01.458100 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]
Dec 12 17:24:01.458179 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 12 17:24:01.458265 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Dec 12 17:24:01.458349 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Dec 12 17:24:01.458439 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Dec 12 17:24:01.458521 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 12 17:24:01.458600 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 12 17:24:01.458687 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Dec 12 17:24:01.458770 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 12 17:24:01.458860 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 12 17:24:01.458949 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Dec 12 17:24:01.459047 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 12 17:24:01.459127 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 12 17:24:01.459210 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Dec 12 17:24:01.459289 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Dec 12 17:24:01.459370 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 12 17:24:01.459455 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Dec 12 17:24:01.459535 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]
Dec 12 17:24:01.459621 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]
Dec 12 17:24:01.459706 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Dec 12 17:24:01.459786 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]
Dec 12 17:24:01.459880 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]
Dec 12 17:24:01.459971 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Dec 12 17:24:01.460055 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]
Dec 12 17:24:01.460139 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]
Dec 12 17:24:01.460218 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Dec 12 17:24:01.460300 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]
Dec 12 17:24:01.460382 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]
Dec 12 17:24:01.460479 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Dec 12 17:24:01.460579 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]
Dec 12 17:24:01.460662 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]
Dec 12 17:24:01.460878 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Dec 12 17:24:01.460963 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]
Dec 12 17:24:01.461045 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]
Dec 12 17:24:01.461127 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Dec 12 17:24:01.461210 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]
Dec 12 17:24:01.461291 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]
Dec 12 17:24:01.461376 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Dec 12 17:24:01.461466 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]
Dec 12 17:24:01.461546 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]
Dec 12 17:24:01.461630 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Dec 12 17:24:01.461717 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]
Dec 12 17:24:01.461802 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]
Dec 12 17:24:01.461898 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Dec 12 17:24:01.461981 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]
Dec 12 17:24:01.462063 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]
Dec 12 17:24:01.462141 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]
Dec 12 17:24:01.462221 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Dec 12 17:24:01.462300 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]
Dec 12 17:24:01.462383 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]
Dec 12 17:24:01.462462 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]
Dec 12 17:24:01.462542 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Dec 12 17:24:01.462622 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]
Dec 12 17:24:01.462703 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]
Dec 12 17:24:01.462788 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]
Dec 12 17:24:01.462884 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Dec 12 17:24:01.462970 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]
Dec 12 17:24:01.463050 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]
Dec 12 17:24:01.463131 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]
Dec 12 17:24:01.463216 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Dec 12 17:24:01.463298 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]
Dec 12 17:24:01.463375 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]
Dec 12 17:24:01.463461 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]
Dec 12 17:24:01.463546 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Dec 12 17:24:01.463630 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]
Dec 12 17:24:01.463709 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]
Dec 12 17:24:01.463788 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]
Dec 12 17:24:01.463880 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Dec 12 17:24:01.463961 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]
Dec 12 17:24:01.464042 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]
Dec 12 17:24:01.464121 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]
Dec 12 17:24:01.464202 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Dec 12 17:24:01.464288 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]
Dec 12 17:24:01.464376 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]
Dec 12 17:24:01.464474 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]
Dec 12 17:24:01.464556 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Dec 12 17:24:01.464640 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]
Dec 12 17:24:01.464727 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]
Dec 12 17:24:01.464813 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]
Dec 12 17:24:01.464906 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Dec 12 17:24:01.464995 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]
Dec 12 17:24:01.465080 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]
Dec 12 17:24:01.465160 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]
Dec 12 17:24:01.465246 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Dec 12 17:24:01.465331 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]
Dec 12 17:24:01.465414 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]
Dec 12 17:24:01.465494 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]
Dec 12 17:24:01.465583 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Dec 12 17:24:01.465665 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]
Dec 12 17:24:01.465748 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]
Dec 12 17:24:01.465828 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]
Dec 12 17:24:01.465955 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Dec 12 17:24:01.466046 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]
Dec 12 17:24:01.466131 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]
Dec 12 17:24:01.466212 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]
Dec 12 17:24:01.466294 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Dec 12 17:24:01.466377 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]
Dec 12 17:24:01.466463 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]
Dec 12 17:24:01.466541 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]
Dec 12 17:24:01.466626 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Dec 12 17:24:01.466706 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]
Dec 12 17:24:01.466786 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]
Dec 12 17:24:01.466877 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]
Dec 12 17:24:01.466961 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:24:01.467034 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:24:01.467106 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:24:01.467190 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 12 17:24:01.467264 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 12 17:24:01.467347 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 12 17:24:01.467420 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 12 17:24:01.467501 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 12 17:24:01.467574 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 12 17:24:01.467654 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Dec 12 17:24:01.467736 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 12 17:24:01.467817 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Dec 12 17:24:01.467915 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 12 17:24:01.468007 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Dec 12 17:24:01.468081 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 12 17:24:01.468166 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Dec 12 17:24:01.468242 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 12 17:24:01.468332 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Dec 12 17:24:01.468421 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 12 17:24:01.468505 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Dec 12 17:24:01.468581 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 12 17:24:01.468665 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff]
Dec 12 17:24:01.468748 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref]
Dec 12 17:24:01.468828 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff]
Dec 12 17:24:01.468917 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref]
Dec 12 17:24:01.469004 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff]
Dec 12 17:24:01.469087 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref]
Dec 12 17:24:01.469169 kernel: pci_bus 0000:0d: resource 1 [mem 0x11800000-0x119fffff]
Dec 12 17:24:01.469243 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref]
Dec 12 17:24:01.469327 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff]
Dec 12 17:24:01.469401 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref]
Dec 12 17:24:01.469490 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff]
Dec 12 17:24:01.469566 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref]
Dec 12 17:24:01.469645 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff]
Dec 12 17:24:01.469718 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref]
Dec 12 17:24:01.469798 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff]
Dec 12 17:24:01.469906 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref]
Dec 12 17:24:01.469989 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff]
Dec 12 17:24:01.470063 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref]
Dec 12 17:24:01.470146 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff]
Dec 12 17:24:01.470222 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff]
Dec 12 17:24:01.470299 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref]
Dec 12 17:24:01.470384 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff]
Dec 12 17:24:01.470459 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff]
Dec 12 17:24:01.470551 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref]
Dec 12 17:24:01.470644 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff]
Dec 12 17:24:01.470721 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff]
Dec 12 17:24:01.470804 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref]
Dec 12 17:24:01.470898 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff]
Dec 12 17:24:01.470978 kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff]
Dec 12 17:24:01.471053 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref]
Dec 12 17:24:01.471138 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff]
Dec 12 17:24:01.471218 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff]
Dec 12 17:24:01.471292 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref]
Dec 12 17:24:01.471375 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff]
Dec 12 17:24:01.471449 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff]
Dec 12 17:24:01.471522 kernel: pci_bus 0000:18: resource 2 [mem 0x8002e00000-0x8002ffffff 64bit pref]
Dec 12 17:24:01.471610 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff]
Dec 12 17:24:01.471689 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff]
Dec 12 17:24:01.471766 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref]
Dec 12 17:24:01.471873 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff]
Dec 12 17:24:01.471953 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff]
Dec 12 17:24:01.472027 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref]
Dec 12 17:24:01.472110 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Dec 12 17:24:01.472186 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff]
Dec 12 17:24:01.472260 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref]
Dec 12 17:24:01.472340 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff]
Dec 12 17:24:01.472464 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff]
Dec 12 17:24:01.472563 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref]
Dec 12 17:24:01.472652 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff]
Dec 12 17:24:01.472728 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff]
Dec 12 17:24:01.472801 kernel: pci_bus 0000:1d: resource 2 [mem 0x8003800000-0x80039fffff 64bit pref]
Dec 12 17:24:01.472905 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff]
Dec 12 17:24:01.472982 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff]
Dec 12 17:24:01.473055 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref]
Dec 12 17:24:01.473137 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff]
Dec 12 17:24:01.473211 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff]
Dec 12 17:24:01.473284 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref]
Dec 12 17:24:01.473369 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff]
Dec 12 17:24:01.473450 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff]
Dec 12 17:24:01.473526 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref]
Dec 12 17:24:01.473605 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff]
Dec 12 17:24:01.473679 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff]
Dec 12 17:24:01.473752 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref]
Dec 12 17:24:01.473786 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:24:01.473794 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:24:01.473802 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:24:01.473813 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:24:01.473821 kernel: iommu: Default domain type: Translated
Dec 12 17:24:01.473829 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:24:01.473843 kernel: efivars: Registered efivars operations
Dec 12 17:24:01.473851 kernel: vgaarb: loaded
Dec 12 17:24:01.473859 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:24:01.473867 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:24:01.473877 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:24:01.473885 kernel: pnp: PnP ACPI init
Dec 12 17:24:01.473989 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:24:01.474002 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:24:01.474010 kernel: NET: Registered PF_INET protocol family
Dec 12 17:24:01.474018 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:24:01.474028 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 12 17:24:01.474037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:24:01.474045 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 12 17:24:01.474053 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 12 17:24:01.474060 kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 12 17:24:01.474068 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 12 17:24:01.474076 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 12 17:24:01.474086 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:24:01.474176 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Dec 12 17:24:01.474188 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:24:01.474196 kernel: kvm [1]: HYP mode not available
Dec 12 17:24:01.474205 kernel: Initialise system trusted keyrings
Dec 12 17:24:01.474212 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0
Dec 12 17:24:01.474220 kernel: Key type asymmetric registered
Dec 12 17:24:01.474230 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:24:01.474238 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:24:01.474246 kernel: io scheduler mq-deadline registered
Dec 12 17:24:01.474253 kernel: io scheduler kyber registered
Dec 12 17:24:01.474261 kernel: io scheduler bfq registered
Dec 12 17:24:01.474270 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:24:01.474352 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50
Dec 12 17:24:01.474440 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50
Dec 12 17:24:01.474527 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.474610 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51
Dec 12 17:24:01.474689 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51
Dec 12 17:24:01.474768 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.474863 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52
Dec 12 17:24:01.474950 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52
Dec 12 17:24:01.475036 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.475117 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53
Dec 12 17:24:01.475196 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53
Dec 12 17:24:01.475274 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.475353 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54
Dec 12 17:24:01.475437 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54
Dec 12 17:24:01.475532 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.475624 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55
Dec 12 17:24:01.475705 kernel: pcieport 0000:00:01.5: AER: enabled with IRQ 55
Dec 12 17:24:01.475783 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.475872 kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56
Dec 12 17:24:01.475952 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56
Dec 12 17:24:01.476030 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.476118 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57
Dec 12 17:24:01.476207 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57
Dec 12 17:24:01.476285 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.476297 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 12 17:24:01.476374 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58
Dec 12 17:24:01.476467 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58
Dec 12 17:24:01.476550 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.476635 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59
Dec 12 17:24:01.476715 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59
Dec 12 17:24:01.476793 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.476906 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60
Dec 12 17:24:01.476988 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60
Dec 12 17:24:01.477070 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.477154 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61
Dec 12 17:24:01.477239 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61
Dec 12 17:24:01.477319 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.477404 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62
Dec 12 17:24:01.477490 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62
Dec 12 17:24:01.477570 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.477654 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63
Dec 12 17:24:01.477735 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63
Dec 12 17:24:01.477815 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.477906 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64
Dec 12 17:24:01.477988 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64
Dec 12 17:24:01.478071 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.478171 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65
Dec 12 17:24:01.478253 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65
Dec 12 17:24:01.478333 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.478344 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Dec 12 17:24:01.478422 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66
Dec 12 17:24:01.478502 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66
Dec 12 17:24:01.478582 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.478663 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67
Dec 12 17:24:01.478741 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67
Dec 12 17:24:01.478825 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.478924 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68
Dec 12 17:24:01.479004 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68
Dec 12 17:24:01.479082 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.479168 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69
Dec 12 17:24:01.479257 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69
Dec 12 17:24:01.479339 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.479430 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70
Dec 12 17:24:01.479514 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70
Dec 12 17:24:01.479596 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.479680 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71
Dec 12 17:24:01.479759 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71
Dec 12 17:24:01.479872 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.479955 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72
Dec 12 17:24:01.480034 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72
Dec 12 17:24:01.480113 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.480198 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73
Dec 12 17:24:01.480277 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73
Dec 12 17:24:01.480356 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.480367 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 12 17:24:01.480458 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74
Dec 12 17:24:01.480548 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74
Dec 12 17:24:01.480627 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.480713 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75
Dec 12 17:24:01.480793 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75
Dec 12 17:24:01.480893 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.480976 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76
Dec 12 17:24:01.481055 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76
Dec 12 17:24:01.481133 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.481218 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77
Dec 12 17:24:01.481297 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77
Dec 12 17:24:01.481376 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.481456 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78
Dec 12 17:24:01.481541 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78
Dec 12 17:24:01.481625 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.481710 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79
Dec 12 17:24:01.481795 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79
Dec 12 17:24:01.481889 kernel: pcieport 0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.481971 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80
Dec 12 17:24:01.482051 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80
Dec 12 17:24:01.482129 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.482213 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81
Dec 12 17:24:01.482297 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81
Dec 12 17:24:01.482377 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.482462 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82
Dec 12 17:24:01.482548 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82
Dec 12 17:24:01.482628 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 17:24:01.482639 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:24:01.482649 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:24:01.482735 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002)
Dec 12 17:24:01.482821 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Dec 12 17:24:01.482840 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:24:01.482849 kernel: thunder_xcv, ver 1.0
Dec 12 17:24:01.482856 kernel: thunder_bgx, ver 1.0
Dec 12 17:24:01.482864 kernel: nicpf, ver 1.0
Dec 12 17:24:01.482874 kernel: nicvf, ver 1.0
Dec 12 17:24:01.482978 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:24:01.483055 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:24:00 UTC (1765560240)
Dec 12 17:24:01.483066 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:24:01.483074 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:24:01.483082 kernel: watchdog: NMI not fully supported
Dec 12 17:24:01.483093 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:24:01.483101 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:24:01.483109 kernel: Segment Routing with IPv6
Dec 12 17:24:01.483116 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:24:01.483124 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:24:01.483132 kernel: Key type dns_resolver registered
Dec 12 17:24:01.483140 kernel: registered taskstats version 1
Dec 12 17:24:01.483150 kernel: Loading compiled-in X.509 certificates
Dec 12 17:24:01.483159 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9'
Dec 12 17:24:01.483166 kernel: Demotion targets for Node 0: null
Dec 12 17:24:01.483174 kernel: Key type .fscrypt registered
Dec 12 17:24:01.483182 kernel: Key type fscrypt-provisioning registered
Dec 12 17:24:01.483190 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:24:01.483198 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:24:01.483205 kernel: ima: No architecture policies found
Dec 12 17:24:01.483215 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:24:01.483223 kernel: clk: Disabling unused clocks
Dec 12 17:24:01.483231 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:24:01.483239 kernel: Freeing unused kernel memory: 12416K
Dec 12 17:24:01.483247 kernel: Run /init as init process
Dec 12 17:24:01.483255 kernel: with arguments:
Dec 12 17:24:01.483263 kernel: /init
Dec 12 17:24:01.483272 kernel: with environment:
Dec 12 17:24:01.483279 kernel: HOME=/
Dec 12 17:24:01.483287 kernel: TERM=linux
Dec 12 17:24:01.483294 kernel: ACPI: bus type USB registered
Dec 12 17:24:01.483308 kernel: usbcore: registered new interface driver usbfs
Dec 12 17:24:01.483316 kernel: usbcore: registered new interface driver hub
Dec 12 17:24:01.483324 kernel: usbcore: registered new device driver usb
Dec 12 17:24:01.483412 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 12 17:24:01.483495 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 12 17:24:01.483575 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 12 17:24:01.483656 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 12 17:24:01.483737 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 12 17:24:01.483817 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 12 17:24:01.483934 kernel: hub 1-0:1.0: USB hub found
Dec 12 17:24:01.484054 kernel: hub 1-0:1.0: 4 ports detected
Dec 12 17:24:01.484164 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 12 17:24:01.484261 kernel: hub 2-0:1.0: USB hub found
Dec 12 17:24:01.484348 kernel: hub 2-0:1.0: 4 ports detected
Dec 12 17:24:01.484455 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 12 17:24:01.484547 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB)
Dec 12 17:24:01.484558 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 17:24:01.484567 kernel: GPT:25804799 != 104857599
Dec 12 17:24:01.484575 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 17:24:01.484583 kernel: GPT:25804799 != 104857599
Dec 12 17:24:01.484592 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 17:24:01.484602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:24:01.484610 kernel: SCSI subsystem initialized
Dec 12 17:24:01.484618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:24:01.484626 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:24:01.484635 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:24:01.484643 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:24:01.484653 kernel: raid6: neonx8 gen() 15769 MB/s
Dec 12 17:24:01.484661 kernel: raid6: neonx4 gen() 15503 MB/s
Dec 12 17:24:01.484669 kernel: raid6: neonx2 gen() 13381 MB/s
Dec 12 17:24:01.484678 kernel: raid6: neonx1 gen() 10511 MB/s
Dec 12 17:24:01.484686 kernel: raid6: int64x8 gen() 6833 MB/s
Dec 12 17:24:01.484694 kernel: raid6: int64x4 gen() 7346 MB/s
Dec 12 17:24:01.484702 kernel: raid6: int64x2 gen() 6099 MB/s
Dec 12 17:24:01.484711 kernel: raid6: int64x1 gen() 5062 MB/s
Dec 12 17:24:01.484720 kernel: raid6: using algorithm neonx8 gen() 15769 MB/s
Dec 12 17:24:01.484729 kernel: raid6: .... xor() 12066 MB/s, rmw enabled
Dec 12 17:24:01.484737 kernel: raid6: using neon recovery algorithm
Dec 12 17:24:01.484745 kernel: xor: measuring software checksum speed
Dec 12 17:24:01.484755 kernel: 8regs : 21630 MB/sec
Dec 12 17:24:01.484763 kernel: 32regs : 21699 MB/sec
Dec 12 17:24:01.484773 kernel: arm64_neon : 28157 MB/sec
Dec 12 17:24:01.484781 kernel: xor: using function: arm64_neon (28157 MB/sec)
Dec 12 17:24:01.484901 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 12 17:24:01.484914 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:24:01.484923 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (277)
Dec 12 17:24:01.484932 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc
Dec 12 17:24:01.484940 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:24:01.484951 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:24:01.484960 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:24:01.484968 kernel: loop: module loaded
Dec 12 17:24:01.484976 kernel: loop0: detected capacity change from 0 to 91480
Dec 12 17:24:01.484984 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 17:24:01.485091 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Dec 12 17:24:01.485106 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:24:01.485118 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:24:01.485127 systemd[1]: Detected virtualization kvm.
Dec 12 17:24:01.485136 systemd[1]: Detected architecture arm64.
Dec 12 17:24:01.485148 systemd[1]: Running in initrd.
Dec 12 17:24:01.485157 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:24:01.485167 systemd[1]: Hostname set to .
Dec 12 17:24:01.485176 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 12 17:24:01.485184 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:24:01.485193 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:24:01.485202 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:24:01.485211 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:24:01.485222 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:24:01.485231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:24:01.485240 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:24:01.485249 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:24:01.485258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:24:01.485267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:24:01.485277 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:24:01.485286 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:24:01.485294 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:24:01.485303 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:24:01.485312 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:24:01.485320 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:24:01.485329 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:24:01.485339 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 12 17:24:01.485347 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:24:01.485356 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:24:01.485365 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:24:01.485374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:24:01.485382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:24:01.485392 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:24:01.485402 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:24:01.485410 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:24:01.485419 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:24:01.485428 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:24:01.485437 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:24:01.485446 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:24:01.485456 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:24:01.485465 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:24:01.485474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:24:01.485483 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:24:01.485493 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:24:01.485502 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:24:01.485511 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:24:01.485543 systemd-journald[417]: Collecting audit messages is enabled.
Dec 12 17:24:01.485565 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:24:01.485574 kernel: Bridge firewalling registered
Dec 12 17:24:01.485583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:24:01.485592 kernel: audit: type=1130 audit(1765560241.417:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.485601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:24:01.485611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:24:01.485621 kernel: audit: type=1130 audit(1765560241.425:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.485629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:24:01.485639 kernel: audit: type=1130 audit(1765560241.437:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.485648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:24:01.485657 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:24:01.485667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:24:01.485676 kernel: audit: type=1130 audit(1765560241.453:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.485685 kernel: audit: type=1334 audit(1765560241.455:6): prog-id=6 op=LOAD
Dec 12 17:24:01.485694 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:24:01.485703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:24:01.485712 kernel: audit: type=1130 audit(1765560241.464:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.485739 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:24:01.485750 kernel: audit: type=1130 audit(1765560241.473:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.485759 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:24:01.485769 systemd-journald[417]: Journal started
Dec 12 17:24:01.485788 systemd-journald[417]: Runtime Journal (/run/log/journal/eb0b0c42d0fc408b9e777edcf73be20b) is 8M, max 319.5M, 311.5M free.
Dec 12 17:24:01.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.455000 audit: BPF prog-id=6 op=LOAD
Dec 12 17:24:01.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.414519 systemd-modules-load[418]: Inserted module 'br_netfilter'
Dec 12 17:24:01.498555 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:24:01.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.503701 kernel: audit: type=1130 audit(1765560241.497:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.502565 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:24:01.508365 systemd-resolved[436]: Positive Trust Anchors:
Dec 12 17:24:01.508396 systemd-resolved[436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:24:01.508400 systemd-resolved[436]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 12 17:24:01.508446 systemd-resolved[436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:24:01.519365 dracut-cmdline[448]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 12 17:24:01.524100 systemd-tmpfiles[459]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:24:01.529047 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:24:01.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.534035 kernel: audit: type=1130 audit(1765560241.529:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.541512 systemd-resolved[436]: Defaulting to hostname 'linux'.
Dec 12 17:24:01.543388 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:24:01.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.544377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:24:01.593860 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:24:01.604874 kernel: iscsi: registered transport (tcp)
Dec 12 17:24:01.619047 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:24:01.619077 kernel: QLogic iSCSI HBA Driver
Dec 12 17:24:01.642903 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:24:01.662952 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:24:01.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.665533 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:24:01.710506 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:24:01.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.712911 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:24:01.714371 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:24:01.747572 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:24:01.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.748000 audit: BPF prog-id=7 op=LOAD
Dec 12 17:24:01.748000 audit: BPF prog-id=8 op=LOAD
Dec 12 17:24:01.750313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:24:01.780127 systemd-udevd[698]: Using default interface naming scheme 'v257'.
Dec 12 17:24:01.787968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:24:01.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.791046 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:24:01.817138 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:24:01.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.818000 audit: BPF prog-id=9 op=LOAD
Dec 12 17:24:01.819979 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:24:01.822942 dracut-pre-trigger[763]: rd.md=0: removing MD RAID activation
Dec 12 17:24:01.845260 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:24:01.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.847136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:24:01.865277 systemd-networkd[808]: lo: Link UP
Dec 12 17:24:01.865288 systemd-networkd[808]: lo: Gained carrier
Dec 12 17:24:01.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.865927 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:24:01.866856 systemd[1]: Reached target network.target - Network.
Dec 12 17:24:01.943087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:24:01.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:01.945208 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 17:24:02.034013 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 17:24:02.043170 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 17:24:02.048858 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Dec 12 17:24:02.051280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:24:02.053659 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 12 17:24:02.057916 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Dec 12 17:24:02.058322 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 17:24:02.060246 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 17:24:02.076324 disk-uuid[873]: Primary Header is updated.
Dec 12 17:24:02.076324 disk-uuid[873]: Secondary Entries is updated.
Dec 12 17:24:02.076324 disk-uuid[873]: Secondary Header is updated.
Dec 12 17:24:02.085479 systemd-networkd[808]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 17:24:02.087680 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:24:02.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:02.087859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:24:02.087984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:24:02.088161 systemd-networkd[808]: eth0: Link UP
Dec 12 17:24:02.088626 systemd-networkd[808]: eth0: Gained carrier
Dec 12 17:24:02.088639 systemd-networkd[808]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 17:24:02.090171 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:24:02.095609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:24:02.109867 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Dec 12 17:24:02.110395 kernel: usbcore: registered new interface driver usbhid
Dec 12 17:24:02.111111 kernel: usbhid: USB HID core driver
Dec 12 17:24:02.130004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:24:02.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:02.163983 systemd-networkd[808]: eth0: DHCPv4 address 10.0.7.100/25, gateway 10.0.7.1 acquired from 10.0.7.1
Dec 12 17:24:02.175171 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:24:02.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:02.176298 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:24:02.177778 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:24:02.179576 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:24:02.182139 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 17:24:02.215325 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:24:02.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.122764 disk-uuid[876]: Warning: The kernel is still using the old partition table.
Dec 12 17:24:03.122764 disk-uuid[876]: The new table will be used at the next reboot or after you
Dec 12 17:24:03.122764 disk-uuid[876]: run partprobe(8) or kpartx(8)
Dec 12 17:24:03.122764 disk-uuid[876]: The operation has completed successfully.
Dec 12 17:24:03.133241 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 17:24:03.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.133357 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 17:24:03.135368 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:24:03.169889 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (909)
Dec 12 17:24:03.171940 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 12 17:24:03.171972 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:24:03.176870 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:24:03.176908 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:24:03.181865 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 12 17:24:03.182896 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:24:03.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.184745 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:24:03.332783 ignition[928]: Ignition 2.22.0
Dec 12 17:24:03.332801 ignition[928]: Stage: fetch-offline
Dec 12 17:24:03.332855 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:03.332865 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 12 17:24:03.334885 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:24:03.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.333035 ignition[928]: parsed url from cmdline: ""
Dec 12 17:24:03.333039 ignition[928]: no config URL provided
Dec 12 17:24:03.333048 ignition[928]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:24:03.337565 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 17:24:03.333056 ignition[928]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:24:03.333060 ignition[928]: failed to fetch config: resource requires networking
Dec 12 17:24:03.333302 ignition[928]: Ignition finished successfully
Dec 12 17:24:03.363873 ignition[943]: Ignition 2.22.0
Dec 12 17:24:03.363890 ignition[943]: Stage: fetch
Dec 12 17:24:03.364036 ignition[943]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:03.364044 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 12 17:24:03.364124 ignition[943]: parsed url from cmdline: ""
Dec 12 17:24:03.364127 ignition[943]: no config URL provided
Dec 12 17:24:03.364132 ignition[943]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:24:03.364138 ignition[943]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:24:03.364578 ignition[943]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 12 17:24:03.364744 ignition[943]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 12 17:24:03.364770 ignition[943]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 12 17:24:03.615159 ignition[943]: GET result: OK
Dec 12 17:24:03.615395 ignition[943]: parsing config with SHA512: 36c340bde0155082cb4ab99108e737b88d969cc0f47ebaf333c6b90536ef773c98a7d86a51125dbf17c163c101a0df35f1c6b1ec330dbb93be0c663181552daa
Dec 12 17:24:03.620446 unknown[943]: fetched base config from "system"
Dec 12 17:24:03.621181 unknown[943]: fetched base config from "system"
Dec 12 17:24:03.621189 unknown[943]: fetched user config from "openstack"
Dec 12 17:24:03.621510 ignition[943]: fetch: fetch complete
Dec 12 17:24:03.621515 ignition[943]: fetch: fetch passed
Dec 12 17:24:03.623225 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 17:24:03.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.621562 ignition[943]: Ignition finished successfully
Dec 12 17:24:03.625509 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:24:03.665488 ignition[951]: Ignition 2.22.0
Dec 12 17:24:03.665508 ignition[951]: Stage: kargs
Dec 12 17:24:03.665647 ignition[951]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:03.665655 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 12 17:24:03.666452 ignition[951]: kargs: kargs passed
Dec 12 17:24:03.666497 ignition[951]: Ignition finished successfully
Dec 12 17:24:03.670508 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:24:03.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.672424 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:24:03.707249 ignition[959]: Ignition 2.22.0
Dec 12 17:24:03.707268 ignition[959]: Stage: disks
Dec 12 17:24:03.707409 ignition[959]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:03.707418 ignition[959]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 12 17:24:03.708165 ignition[959]: disks: disks passed
Dec 12 17:24:03.710319 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:24:03.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.708210 ignition[959]: Ignition finished successfully
Dec 12 17:24:03.711594 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:24:03.712818 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:24:03.714241 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:24:03.715630 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:24:03.717284 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:24:03.719573 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:24:03.770429 systemd-fsck[969]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Dec 12 17:24:03.774021 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:24:03.779042 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 12 17:24:03.779070 kernel: audit: type=1130 audit(1765560243.774:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:03.776054 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:24:03.782933 systemd-networkd[808]: eth0: Gained IPv6LL
Dec 12 17:24:03.883857 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none.
Dec 12 17:24:03.884179 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:24:03.885313 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:24:03.888574 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:24:03.890209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:24:03.891046 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 17:24:03.892809 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 12 17:24:03.894540 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:24:03.894574 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:24:03.903726 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:24:03.906101 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:24:03.922878 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (977)
Dec 12 17:24:03.926097 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 12 17:24:03.926117 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:24:03.933916 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:24:03.933942 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:24:03.935066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:24:03.969865 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 12 17:24:03.979359 initrd-setup-root[1006]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:24:03.985736 initrd-setup-root[1013]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:24:03.990192 initrd-setup-root[1020]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:24:03.993466 initrd-setup-root[1027]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:24:04.099779 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 17:24:04.103909 kernel: audit: type=1130 audit(1765560244.099:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:04.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:04.102129 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:24:04.105334 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:24:04.122690 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:24:04.124197 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 12 17:24:04.147283 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:24:04.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:04.150862 kernel: audit: type=1130 audit(1765560244.147:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:04.160013 ignition[1095]: INFO : Ignition 2.22.0
Dec 12 17:24:04.160013 ignition[1095]: INFO : Stage: mount
Dec 12 17:24:04.161425 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:04.161425 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 12 17:24:04.161425 ignition[1095]: INFO : mount: mount passed
Dec 12 17:24:04.161425 ignition[1095]: INFO : Ignition finished successfully
Dec 12 17:24:04.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:04.163507 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:24:04.167901 kernel: audit: type=1130 audit(1765560244.163:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:05.007902 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 12 17:24:07.015869 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 12 17:24:11.023913 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 12 17:24:11.029352 coreos-metadata[979]: Dec 12 17:24:11.029 WARN failed to locate config-drive, using the metadata service API instead
Dec 12 17:24:11.047660 coreos-metadata[979]: Dec 12 17:24:11.047 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 12 17:24:11.172354 coreos-metadata[979]: Dec 12 17:24:11.172 INFO Fetch successful
Dec 12 17:24:11.173270 coreos-metadata[979]: Dec 12 17:24:11.173 INFO wrote hostname ci-4515-1-0-4-1de611bfb5 to /sysroot/etc/hostname
Dec 12 17:24:11.175181 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 12 17:24:11.182506 kernel: audit: type=1130 audit(1765560251.175:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:11.182538 kernel: audit: type=1131 audit(1765560251.175:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:11.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:11.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:11.175289 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 12 17:24:11.177382 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:24:11.199284 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:24:11.236853 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1113)
Dec 12 17:24:11.242537 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 12 17:24:11.242598 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:24:11.256425 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:24:11.256509 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:24:11.258141 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:24:11.288742 ignition[1131]: INFO : Ignition 2.22.0
Dec 12 17:24:11.289649 ignition[1131]: INFO : Stage: files
Dec 12 17:24:11.289649 ignition[1131]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:11.289649 ignition[1131]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 12 17:24:11.292059 ignition[1131]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 17:24:11.292978 ignition[1131]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 17:24:11.292978 ignition[1131]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 17:24:11.302185 ignition[1131]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 17:24:11.303419 ignition[1131]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 17:24:11.303419 ignition[1131]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 17:24:11.302746 unknown[1131]: wrote ssh authorized keys file for user: core
Dec 12 17:24:11.307239 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:24:11.308809 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 12 17:24:11.385496 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 17:24:11.636541 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:24:11.636541 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:24:11.640688 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Dec 12 17:24:11.749613 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 17:24:12.313183 ignition[1131]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:24:12.313183 ignition[1131]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 17:24:12.316813 ignition[1131]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:24:12.318330 ignition[1131]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:24:12.318330 ignition[1131]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 17:24:12.318330 ignition[1131]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 17:24:12.318330 ignition[1131]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 17:24:12.318330 ignition[1131]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:24:12.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.326942 ignition[1131]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:24:12.326942 ignition[1131]: INFO : files: files passed
Dec 12 17:24:12.326942 ignition[1131]: INFO : Ignition finished successfully
Dec 12 17:24:12.330885 kernel: audit: type=1130 audit(1765560252.322:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.322375 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 17:24:12.324932 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 17:24:12.328347 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 17:24:12.340326 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 17:24:12.340459 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 17:24:12.346591 kernel: audit: type=1130 audit(1765560252.341:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.346617 kernel: audit: type=1131 audit(1765560252.341:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.348323 initrd-setup-root-after-ignition[1164]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:24:12.348323 initrd-setup-root-after-ignition[1164]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:24:12.350918 initrd-setup-root-after-ignition[1168]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:24:12.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.350777 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:24:12.357202 kernel: audit: type=1130 audit(1765560252.350:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.351977 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 17:24:12.358149 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 17:24:12.390282 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 17:24:12.390394 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 17:24:12.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.392163 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 17:24:12.398164 kernel: audit: type=1130 audit(1765560252.391:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.398191 kernel: audit: type=1131 audit(1765560252.391:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.397487 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 17:24:12.399064 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 17:24:12.400145 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 17:24:12.439930 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:24:12.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.442169 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 17:24:12.445576 kernel: audit: type=1130 audit(1765560252.440:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.466286 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:24:12.466506 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:24:12.468381 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:24:12.469982 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 17:24:12.471526 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 17:24:12.471655 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:24:12.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.474961 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 17:24:12.477789 kernel: audit: type=1131 audit(1765560252.472:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.477271 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 17:24:12.478574 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 17:24:12.479913 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:24:12.481590 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 17:24:12.483166 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:24:12.484750 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 17:24:12.486239 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:24:12.487848 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 17:24:12.489530 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 17:24:12.490875 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 17:24:12.492159 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 17:24:12.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.492290 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:24:12.494183 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:24:12.495706 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:24:12.497291 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 17:24:12.498818 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:24:12.500976 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 17:24:12.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.501105 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:24:12.502959 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 17:24:12.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.503079 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:24:12.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.504753 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 17:24:12.504887 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 17:24:12.507218 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 17:24:12.509291 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 17:24:12.510597 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 17:24:12.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.510714 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:24:12.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.512203 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 17:24:12.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.512301 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:24:12.513818 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 17:24:12.513932 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:24:12.518949 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 17:24:12.521873 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 17:24:12.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.533942 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 17:24:12.542849 ignition[1188]: INFO : Ignition 2.22.0
Dec 12 17:24:12.542849 ignition[1188]: INFO : Stage: umount
Dec 12 17:24:12.545474 ignition[1188]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:12.545474 ignition[1188]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 12 17:24:12.545474 ignition[1188]: INFO : umount: umount passed
Dec 12 17:24:12.545474 ignition[1188]: INFO : Ignition finished successfully
Dec 12 17:24:12.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.546412 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 17:24:12.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.546510 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 17:24:12.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.548251 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 17:24:12.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.548298 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 17:24:12.549537 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 17:24:12.549578 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 17:24:12.550783 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 17:24:12.550823 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 17:24:12.552982 systemd[1]: Stopped target network.target - Network.
Dec 12 17:24:12.553631 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 17:24:12.553682 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:24:12.557852 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 17:24:12.559207 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 17:24:12.563911 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:24:12.565376 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 17:24:12.566827 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 17:24:12.568155 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 17:24:12.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.568193 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:24:12.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.569569 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 17:24:12.569598 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:24:12.571446 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 12 17:24:12.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.571466 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 12 17:24:12.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.572762 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 17:24:12.572813 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 17:24:12.574138 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 17:24:12.574178 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 17:24:12.575531 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 17:24:12.576864 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 17:24:12.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:12.578426 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 17:24:12.578509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 17:24:12.579945 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 17:24:12.580046 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 17:24:12.584959 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 17:24:12.590000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:24:12.585056 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:24:12.595874 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:24:12.596696 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:24:12.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.600013 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:24:12.601724 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:24:12.602612 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:24:12.602000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:24:12.605274 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:24:12.605985 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:24:12.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.606055 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:24:12.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.607663 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:24:12.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:12.607711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:24:12.609249 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:24:12.609297 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:24:12.610954 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:24:12.625948 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:24:12.626080 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:24:12.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.628155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:24:12.628223 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:24:12.629510 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:24:12.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.629540 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:24:12.630883 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:24:12.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.630928 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:24:12.633179 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Dec 12 17:24:12.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.633228 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:24:12.635304 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:24:12.635353 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:24:12.649699 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:24:12.650642 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:24:12.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.650705 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:24:12.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.652537 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:24:12.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.652582 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:24:12.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:12.654242 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 17:24:12.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.654286 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:24:12.656159 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:24:12.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.656201 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:24:12.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:12.657878 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:24:12.657923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:12.660466 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:24:12.661862 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:24:12.662817 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:24:12.662911 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:24:12.664987 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Dec 12 17:24:12.666677 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 17:24:12.689807 systemd[1]: Switching root.
Dec 12 17:24:12.729615 systemd-journald[417]: Journal stopped
Dec 12 17:24:13.647672 systemd-journald[417]: Received SIGTERM from PID 1 (systemd).
Dec 12 17:24:13.647745 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 17:24:13.647763 kernel: SELinux: policy capability open_perms=1
Dec 12 17:24:13.647774 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 17:24:13.647786 kernel: SELinux: policy capability always_check_network=0
Dec 12 17:24:13.647799 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 17:24:13.647812 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 17:24:13.647827 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 17:24:13.647848 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 17:24:13.647859 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 17:24:13.647875 systemd[1]: Successfully loaded SELinux policy in 70.053ms.
Dec 12 17:24:13.647896 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.941ms.
Dec 12 17:24:13.647908 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:24:13.647920 systemd[1]: Detected virtualization kvm.
Dec 12 17:24:13.647931 systemd[1]: Detected architecture arm64.
Dec 12 17:24:13.647942 systemd[1]: Detected first boot.
Dec 12 17:24:13.647955 systemd[1]: Hostname set to .
Dec 12 17:24:13.647966 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 12 17:24:13.647978 zram_generator::config[1234]: No configuration found.
Dec 12 17:24:13.647995 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 17:24:13.648006 systemd[1]: Populated /etc with preset unit settings.
Dec 12 17:24:13.648016 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 17:24:13.648027 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 17:24:13.648037 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 17:24:13.648049 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 17:24:13.648063 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 17:24:13.648073 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 17:24:13.648084 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 17:24:13.648096 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 17:24:13.648107 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 17:24:13.648118 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 17:24:13.648131 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 17:24:13.648142 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:24:13.648153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:24:13.648164 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 17:24:13.648175 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 17:24:13.648186 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 17:24:13.648200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:24:13.648211 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 12 17:24:13.648223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:24:13.648234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:24:13.648245 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 17:24:13.648256 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 17:24:13.648268 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:24:13.648279 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 17:24:13.648290 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:24:13.648300 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:24:13.648311 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Dec 12 17:24:13.648322 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:24:13.648333 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:24:13.648343 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 17:24:13.648372 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 17:24:13.648386 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 17:24:13.648396 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 12 17:24:13.648409 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Dec 12 17:24:13.648419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:24:13.648430 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Dec 12 17:24:13.648441 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Dec 12 17:24:13.648453 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:24:13.648464 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:24:13.648475 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 17:24:13.648485 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 17:24:13.648496 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 17:24:13.648507 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 17:24:13.648518 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 17:24:13.648532 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 17:24:13.648544 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 17:24:13.648556 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 17:24:13.648567 systemd[1]: Reached target machines.target - Containers.
Dec 12 17:24:13.648577 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 17:24:13.648588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:24:13.648600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:24:13.648611 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 17:24:13.648622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:24:13.648632 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:24:13.648644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:24:13.648655 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 17:24:13.648666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:24:13.648677 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 17:24:13.648688 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 17:24:13.648699 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 17:24:13.648710 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 17:24:13.648722 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 17:24:13.648733 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:24:13.648744 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:24:13.648755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:24:13.648767 kernel: fuse: init (API version 7.41)
Dec 12 17:24:13.648777 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:24:13.648788 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 17:24:13.648801 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 17:24:13.648811 kernel: ACPI: bus type drm_connector registered
Dec 12 17:24:13.648822 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:24:13.648842 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 17:24:13.648853 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 17:24:13.648864 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 17:24:13.648875 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 17:24:13.648888 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 17:24:13.648900 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 17:24:13.648910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:24:13.648946 systemd-journald[1306]: Collecting audit messages is enabled.
Dec 12 17:24:13.648972 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 17:24:13.648984 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 17:24:13.648995 systemd-journald[1306]: Journal started
Dec 12 17:24:13.649016 systemd-journald[1306]: Runtime Journal (/run/log/journal/eb0b0c42d0fc408b9e777edcf73be20b) is 8M, max 319.5M, 311.5M free.
Dec 12 17:24:13.511000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 12 17:24:13.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.600000 audit: BPF prog-id=14 op=UNLOAD
Dec 12 17:24:13.600000 audit: BPF prog-id=13 op=UNLOAD
Dec 12 17:24:13.601000 audit: BPF prog-id=15 op=LOAD
Dec 12 17:24:13.601000 audit: BPF prog-id=16 op=LOAD
Dec 12 17:24:13.601000 audit: BPF prog-id=17 op=LOAD
Dec 12 17:24:13.644000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 12 17:24:13.644000 audit[1306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe3738a00 a2=4000 a3=0 items=0 ppid=1 pid=1306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:24:13.644000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 12 17:24:13.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.424084 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 17:24:13.447052 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 17:24:13.447499 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 17:24:13.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.651061 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:24:13.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.652651 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 17:24:13.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.654028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:24:13.654182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:24:13.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.655373 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:24:13.655529 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:24:13.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.656706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:24:13.656884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:24:13.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.658157 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 17:24:13.658315 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 17:24:13.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.659453 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:24:13.659590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:24:13.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.660867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:24:13.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.662111 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:24:13.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.664180 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 17:24:13.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.665586 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 17:24:13.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.677733 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:24:13.679951 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Dec 12 17:24:13.682101 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 17:24:13.687004 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 17:24:13.687878 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 17:24:13.687916 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:24:13.689552 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 17:24:13.690803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:24:13.690927 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 12 17:24:13.703998 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 17:24:13.705827 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 17:24:13.706740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:24:13.707792 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 17:24:13.709478 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:24:13.711135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:24:13.716147 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 17:24:13.721210 systemd-journald[1306]: Time spent on flushing to /var/log/journal/eb0b0c42d0fc408b9e777edcf73be20b is 35.701ms for 1814 entries.
Dec 12 17:24:13.721210 systemd-journald[1306]: System Journal (/var/log/journal/eb0b0c42d0fc408b9e777edcf73be20b) is 8M, max 588.1M, 580.1M free.
Dec 12 17:24:13.790588 systemd-journald[1306]: Received client request to flush runtime journal.
Dec 12 17:24:13.790655 kernel: loop1: detected capacity change from 0 to 1648
Dec 12 17:24:13.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.720105 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:24:13.723154 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:24:13.726190 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 17:24:13.728182 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 17:24:13.730848 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 17:24:13.733307 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 17:24:13.736110 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 17:24:13.751240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:24:13.766507 systemd-tmpfiles[1355]: ACLs are not supported, ignoring.
Dec 12 17:24:13.766518 systemd-tmpfiles[1355]: ACLs are not supported, ignoring.
Dec 12 17:24:13.770998 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:24:13.774141 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 17:24:13.795920 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 17:24:13.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.799854 kernel: loop2: detected capacity change from 0 to 109872
Dec 12 17:24:13.812134 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 17:24:13.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.834007 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 17:24:13.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.835000 audit: BPF prog-id=18 op=LOAD
Dec 12 17:24:13.835000 audit: BPF prog-id=19 op=LOAD
Dec 12 17:24:13.835000 audit: BPF prog-id=20 op=LOAD
Dec 12 17:24:13.837038 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Dec 12 17:24:13.837000 audit: BPF prog-id=21 op=LOAD
Dec 12 17:24:13.839424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:24:13.842988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:24:13.848864 kernel: loop3: detected capacity change from 0 to 100192
Dec 12 17:24:13.860000 audit: BPF prog-id=22 op=LOAD
Dec 12 17:24:13.860000 audit: BPF prog-id=23 op=LOAD
Dec 12 17:24:13.860000 audit: BPF prog-id=24 op=LOAD
Dec 12 17:24:13.862310 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Dec 12 17:24:13.863000 audit: BPF prog-id=25 op=LOAD
Dec 12 17:24:13.863000 audit: BPF prog-id=26 op=LOAD
Dec 12 17:24:13.863000 audit: BPF prog-id=27 op=LOAD
Dec 12 17:24:13.865322 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 17:24:13.877659 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Dec 12 17:24:13.877679 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Dec 12 17:24:13.882939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:24:13.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.884879 kernel: loop4: detected capacity change from 0 to 211168
Dec 12 17:24:13.897330 systemd-nsresourced[1377]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 12 17:24:13.898609 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 12 17:24:13.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:24:13.906526 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 17:24:13.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 12 17:24:13.931864 kernel: loop5: detected capacity change from 0 to 1648 Dec 12 17:24:13.940917 kernel: loop6: detected capacity change from 0 to 109872 Dec 12 17:24:13.953863 kernel: loop7: detected capacity change from 0 to 100192 Dec 12 17:24:13.959531 systemd-oomd[1374]: No swap; memory pressure usage will be degraded Dec 12 17:24:13.960159 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:24:13.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:13.967865 kernel: loop1: detected capacity change from 0 to 211168 Dec 12 17:24:13.975423 systemd-resolved[1375]: Positive Trust Anchors: Dec 12 17:24:13.975752 systemd-resolved[1375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:24:13.975759 systemd-resolved[1375]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:24:13.975791 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:24:13.984096 (sd-merge)[1397]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Dec 12 17:24:13.985114 systemd-resolved[1375]: Using system hostname 'ci-4515-1-0-4-1de611bfb5'. 
Dec 12 17:24:13.986581 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:24:13.987034 (sd-merge)[1397]: Merged extensions into '/usr'. Dec 12 17:24:13.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:13.987721 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:24:13.990962 systemd[1]: Reload requested from client PID 1354 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:24:13.990977 systemd[1]: Reloading... Dec 12 17:24:14.046874 zram_generator::config[1427]: No configuration found. Dec 12 17:24:14.203160 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:24:14.203504 systemd[1]: Reloading finished in 212 ms. Dec 12 17:24:14.221135 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:24:14.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.222464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:24:14.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.249505 systemd[1]: Starting ensure-sysext.service... 
Dec 12 17:24:14.252000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:24:14.252000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:24:14.252000 audit: BPF prog-id=28 op=LOAD Dec 12 17:24:14.252000 audit: BPF prog-id=29 op=LOAD Dec 12 17:24:14.251233 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:24:14.253397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:24:14.254000 audit: BPF prog-id=30 op=LOAD Dec 12 17:24:14.254000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:24:14.254000 audit: BPF prog-id=31 op=LOAD Dec 12 17:24:14.254000 audit: BPF prog-id=32 op=LOAD Dec 12 17:24:14.254000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:24:14.254000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:24:14.254000 audit: BPF prog-id=33 op=LOAD Dec 12 17:24:14.254000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:24:14.255000 audit: BPF prog-id=34 op=LOAD Dec 12 17:24:14.255000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:24:14.255000 audit: BPF prog-id=35 op=LOAD Dec 12 17:24:14.255000 audit: BPF prog-id=36 op=LOAD Dec 12 17:24:14.255000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:24:14.255000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:24:14.256000 audit: BPF prog-id=37 op=LOAD Dec 12 17:24:14.256000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:24:14.256000 audit: BPF prog-id=38 op=LOAD Dec 12 17:24:14.256000 audit: BPF prog-id=39 op=LOAD Dec 12 17:24:14.256000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:24:14.256000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:24:14.256000 audit: BPF prog-id=40 op=LOAD Dec 12 17:24:14.256000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:24:14.257000 audit: BPF prog-id=41 op=LOAD Dec 12 17:24:14.257000 audit: BPF prog-id=42 op=LOAD Dec 12 17:24:14.257000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:24:14.257000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:24:14.264025 systemd[1]: Reload requested from client PID 1465 ('systemctl') (unit ensure-sysext.service)... 
Dec 12 17:24:14.264039 systemd[1]: Reloading... Dec 12 17:24:14.269031 systemd-tmpfiles[1466]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:24:14.269064 systemd-tmpfiles[1466]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:24:14.269306 systemd-tmpfiles[1466]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:24:14.270224 systemd-tmpfiles[1466]: ACLs are not supported, ignoring. Dec 12 17:24:14.270295 systemd-tmpfiles[1466]: ACLs are not supported, ignoring. Dec 12 17:24:14.277518 systemd-udevd[1467]: Using default interface naming scheme 'v257'. Dec 12 17:24:14.280179 systemd-tmpfiles[1466]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:24:14.280192 systemd-tmpfiles[1466]: Skipping /boot Dec 12 17:24:14.286524 systemd-tmpfiles[1466]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:24:14.286538 systemd-tmpfiles[1466]: Skipping /boot Dec 12 17:24:14.306854 zram_generator::config[1499]: No configuration found. Dec 12 17:24:14.441997 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 17:24:14.482212 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:24:14.482481 systemd[1]: Reloading finished in 218 ms. Dec 12 17:24:14.493120 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:24:14.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:14.496000 audit: BPF prog-id=43 op=LOAD Dec 12 17:24:14.496000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:24:14.497000 audit: BPF prog-id=44 op=LOAD Dec 12 17:24:14.497000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:24:14.497000 audit: BPF prog-id=45 op=LOAD Dec 12 17:24:14.497000 audit: BPF prog-id=46 op=LOAD Dec 12 17:24:14.497000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:24:14.497000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:24:14.497000 audit: BPF prog-id=47 op=LOAD Dec 12 17:24:14.497000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:24:14.497000 audit: BPF prog-id=48 op=LOAD Dec 12 17:24:14.497000 audit: BPF prog-id=49 op=LOAD Dec 12 17:24:14.497000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:24:14.497000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:24:14.498000 audit: BPF prog-id=50 op=LOAD Dec 12 17:24:14.498000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:24:14.498000 audit: BPF prog-id=51 op=LOAD Dec 12 17:24:14.498000 audit: BPF prog-id=52 op=LOAD Dec 12 17:24:14.498000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:24:14.498000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:24:14.499000 audit: BPF prog-id=53 op=LOAD Dec 12 17:24:14.499000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:24:14.499000 audit: BPF prog-id=54 op=LOAD Dec 12 17:24:14.499000 audit: BPF prog-id=55 op=LOAD Dec 12 17:24:14.499000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:24:14.499000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:24:14.499000 audit: BPF prog-id=56 op=LOAD Dec 12 17:24:14.499000 audit: BPF prog-id=57 op=LOAD Dec 12 17:24:14.499000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:24:14.499000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:24:14.503962 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:24:14.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:14.530977 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0 Dec 12 17:24:14.531052 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 17:24:14.531065 kernel: [drm] features: -context_init Dec 12 17:24:14.566755 systemd[1]: Finished ensure-sysext.service. Dec 12 17:24:14.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.590892 kernel: [drm] number of scanouts: 1 Dec 12 17:24:14.590968 kernel: [drm] number of cap sets: 0 Dec 12 17:24:14.592572 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0 Dec 12 17:24:14.596194 kernel: Console: switching to colour frame buffer device 160x50 Dec 12 17:24:14.599880 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 17:24:14.614907 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:24:14.625461 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:24:14.627695 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:24:14.628829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:14.638805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:24:14.641669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:24:14.643547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:24:14.646267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:24:14.649747 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... 
Dec 12 17:24:14.651214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:24:14.651391 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:24:14.653011 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:24:14.655937 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:24:14.657107 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:14.658596 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:24:14.663039 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 12 17:24:14.663084 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 12 17:24:14.660000 audit: BPF prog-id=58 op=LOAD Dec 12 17:24:14.662690 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:24:14.664312 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:24:14.666855 kernel: PTP clock support registered Dec 12 17:24:14.668191 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:24:14.670744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:14.673806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:24:14.680227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:24:14.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 17:24:14.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.682509 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:24:14.682694 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:24:14.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.684000 audit[1606]: SYSTEM_BOOT pid=1606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.686117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:24:14.686321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:24:14.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.688213 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 12 17:24:14.689651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:24:14.692491 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Dec 12 17:24:14.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.692684 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Dec 12 17:24:14.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.694525 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:24:14.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.703494 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:24:14.703633 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 12 17:24:14.708011 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:24:14.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.710086 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:24:14.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:14.728247 augenrules[1632]: No rules Dec 12 17:24:14.727000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:24:14.727000 audit[1632]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc76dfa60 a2=420 a3=0 items=0 ppid=1587 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:14.727000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:24:14.729649 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:24:14.730951 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:24:14.766532 systemd-networkd[1605]: lo: Link UP Dec 12 17:24:14.766540 systemd-networkd[1605]: lo: Gained carrier Dec 12 17:24:14.767968 systemd-networkd[1605]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:24:14.767978 systemd-networkd[1605]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 12 17:24:14.767984 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:24:14.769512 systemd[1]: Reached target network.target - Network. Dec 12 17:24:14.769815 systemd-networkd[1605]: eth0: Link UP Dec 12 17:24:14.770321 systemd-networkd[1605]: eth0: Gained carrier Dec 12 17:24:14.770343 systemd-networkd[1605]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:24:14.771631 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:24:14.774978 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:24:14.784943 systemd-networkd[1605]: eth0: DHCPv4 address 10.0.7.100/25, gateway 10.0.7.1 acquired from 10.0.7.1 Dec 12 17:24:14.789370 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:24:14.791100 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:24:14.795362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:14.801805 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:24:15.347859 ldconfig[1600]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:24:15.352412 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:24:15.354826 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:24:15.377924 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:24:15.379042 systemd[1]: Reached target sysinit.target - System Initialization. 
Dec 12 17:24:15.379973 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:24:15.380952 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:24:15.382064 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:24:15.382990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:24:15.384000 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:24:15.385068 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:24:15.385923 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:24:15.386862 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:24:15.386895 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:24:15.387579 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:24:15.390023 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:24:15.392211 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:24:15.394951 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:24:15.396066 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:24:15.397094 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:24:15.399846 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:24:15.400999 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:24:15.402505 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Dec 12 17:24:15.403483 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:24:15.404277 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:24:15.405076 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:24:15.405116 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:24:15.408188 systemd[1]: Starting chronyd.service - NTP client/server... Dec 12 17:24:15.409920 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:24:15.411863 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:24:15.415002 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:24:15.416897 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:24:15.418906 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:24:15.420884 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:24:15.421971 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:24:15.424994 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:24:15.426075 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:24:15.434903 jq[1657]: false Dec 12 17:24:15.431034 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:24:15.433014 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:24:15.434853 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:24:15.441960 extend-filesystems[1660]: Found /dev/vda6 Dec 12 17:24:15.444514 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 12 17:24:15.445864 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:24:15.446437 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:24:15.447103 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:24:15.453258 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:24:15.453555 chronyd[1652]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 12 17:24:15.454145 extend-filesystems[1660]: Found /dev/vda9 Dec 12 17:24:15.455644 extend-filesystems[1660]: Checking size of /dev/vda9 Dec 12 17:24:15.457185 chronyd[1652]: Loaded seccomp filter (level 2) Dec 12 17:24:15.458463 systemd[1]: Started chronyd.service - NTP client/server. Dec 12 17:24:15.460567 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:24:15.462422 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:24:15.463924 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:24:15.465608 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:24:15.465853 jq[1674]: true Dec 12 17:24:15.466338 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:24:15.477022 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:24:15.484902 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 12 17:24:15.485075 extend-filesystems[1660]: Resized partition /dev/vda9 Dec 12 17:24:15.494468 jq[1692]: true Dec 12 17:24:15.494944 update_engine[1671]: I20251212 17:24:15.493775 1671 main.cc:92] Flatcar Update Engine starting Dec 12 17:24:15.496686 extend-filesystems[1705]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:24:15.505302 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Dec 12 17:24:15.507445 tar[1684]: linux-arm64/LICENSE Dec 12 17:24:15.507445 tar[1684]: linux-arm64/helm Dec 12 17:24:15.538970 dbus-daemon[1655]: [system] SELinux support is enabled Dec 12 17:24:15.539219 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:24:15.542522 update_engine[1671]: I20251212 17:24:15.542377 1671 update_check_scheduler.cc:74] Next update check in 2m59s Dec 12 17:24:15.543746 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:24:15.543789 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:24:15.545092 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:24:15.545117 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:24:15.549008 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:24:15.555075 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:24:15.561017 systemd-logind[1668]: New seat seat0. 
Dec 12 17:24:15.563488 systemd-logind[1668]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:24:15.563748 systemd-logind[1668]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 12 17:24:15.564034 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:24:15.620470 locksmithd[1723]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:24:15.653844 containerd[1693]: time="2025-12-12T17:24:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:24:15.655285 containerd[1693]: time="2025-12-12T17:24:15.655245440Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:24:15.657161 bash[1724]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:24:15.658987 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:24:15.663149 systemd[1]: Starting sshkeys.service... 
Dec 12 17:24:15.675024 containerd[1693]: time="2025-12-12T17:24:15.674981120Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.4µs" Dec 12 17:24:15.675024 containerd[1693]: time="2025-12-12T17:24:15.675021680Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:24:15.675093 containerd[1693]: time="2025-12-12T17:24:15.675065520Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:24:15.675093 containerd[1693]: time="2025-12-12T17:24:15.675076720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:24:15.675328 containerd[1693]: time="2025-12-12T17:24:15.675304120Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:24:15.675434 containerd[1693]: time="2025-12-12T17:24:15.675412400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:24:15.675531 containerd[1693]: time="2025-12-12T17:24:15.675509680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:24:15.675557 containerd[1693]: time="2025-12-12T17:24:15.675529640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678100 containerd[1693]: time="2025-12-12T17:24:15.678062200Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678139 containerd[1693]: time="2025-12-12T17:24:15.678098720Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678139 containerd[1693]: time="2025-12-12T17:24:15.678116520Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678139 containerd[1693]: time="2025-12-12T17:24:15.678126720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678352 containerd[1693]: time="2025-12-12T17:24:15.678320520Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678352 containerd[1693]: time="2025-12-12T17:24:15.678349080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678441 containerd[1693]: time="2025-12-12T17:24:15.678424400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678614 containerd[1693]: time="2025-12-12T17:24:15.678593920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678650 containerd[1693]: time="2025-12-12T17:24:15.678633640Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:24:15.678677 containerd[1693]: time="2025-12-12T17:24:15.678647600Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:24:15.679056 containerd[1693]: time="2025-12-12T17:24:15.679031560Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:24:15.679447 containerd[1693]: 
time="2025-12-12T17:24:15.679426400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:24:15.679550 containerd[1693]: time="2025-12-12T17:24:15.679530680Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:24:15.683508 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 17:24:15.687248 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 17:24:15.709878 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:24:15.716646 containerd[1693]: time="2025-12-12T17:24:15.716606880Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:24:15.716986 containerd[1693]: time="2025-12-12T17:24:15.716952400Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:24:15.717266 containerd[1693]: time="2025-12-12T17:24:15.717198320Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:24:15.717266 containerd[1693]: time="2025-12-12T17:24:15.717231120Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:24:15.717266 containerd[1693]: time="2025-12-12T17:24:15.717246440Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:24:15.717452 containerd[1693]: time="2025-12-12T17:24:15.717431480Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:24:15.717615 containerd[1693]: time="2025-12-12T17:24:15.717525520Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:24:15.717615 
containerd[1693]: time="2025-12-12T17:24:15.717542840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:24:15.717615 containerd[1693]: time="2025-12-12T17:24:15.717556440Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:24:15.717615 containerd[1693]: time="2025-12-12T17:24:15.717570000Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:24:15.717852 containerd[1693]: time="2025-12-12T17:24:15.717812720Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:24:15.717942 containerd[1693]: time="2025-12-12T17:24:15.717926640Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:24:15.718077 containerd[1693]: time="2025-12-12T17:24:15.718059000Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:24:15.718155 containerd[1693]: time="2025-12-12T17:24:15.718140880Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:24:15.718554 containerd[1693]: time="2025-12-12T17:24:15.718478320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:24:15.718654 containerd[1693]: time="2025-12-12T17:24:15.718629480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718720120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718739200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:24:15.719261 containerd[1693]: 
time="2025-12-12T17:24:15.718751280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718762840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718774560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718784880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718796520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718807720Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718825280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718879600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718930880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718944040Z" level=info msg="Start snapshots syncer" Dec 12 17:24:15.719261 containerd[1693]: time="2025-12-12T17:24:15.718977120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:24:15.719555 containerd[1693]: time="2025-12-12T17:24:15.719232880Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:24:15.719555 containerd[1693]: time="2025-12-12T17:24:15.719448000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:24:15.719675 containerd[1693]: 
time="2025-12-12T17:24:15.719521880Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719690360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719721320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719803560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719815320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719887880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719902720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719913000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719924680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.719985160Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.720027120Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:24:15.721843 containerd[1693]: 
time="2025-12-12T17:24:15.720040600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.720061560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.720112400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:24:15.721843 containerd[1693]: time="2025-12-12T17:24:15.720124760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:24:15.722115 containerd[1693]: time="2025-12-12T17:24:15.720146520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:24:15.722115 containerd[1693]: time="2025-12-12T17:24:15.720157680Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:24:15.722115 containerd[1693]: time="2025-12-12T17:24:15.720170920Z" level=info msg="runtime interface created" Dec 12 17:24:15.722115 containerd[1693]: time="2025-12-12T17:24:15.720176280Z" level=info msg="created NRI interface" Dec 12 17:24:15.722115 containerd[1693]: time="2025-12-12T17:24:15.720190760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:24:15.722115 containerd[1693]: time="2025-12-12T17:24:15.720202640Z" level=info msg="Connect containerd service" Dec 12 17:24:15.722115 containerd[1693]: time="2025-12-12T17:24:15.720280600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:24:15.722115 containerd[1693]: time="2025-12-12T17:24:15.721844360Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:24:15.810178 containerd[1693]: time="2025-12-12T17:24:15.810113520Z" level=info msg="Start subscribing containerd event" Dec 12 17:24:15.810371 containerd[1693]: time="2025-12-12T17:24:15.810218920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:24:15.810431 containerd[1693]: time="2025-12-12T17:24:15.810343000Z" level=info msg="Start recovering state" Dec 12 17:24:15.810558 containerd[1693]: time="2025-12-12T17:24:15.810503080Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:24:15.810615 containerd[1693]: time="2025-12-12T17:24:15.810598120Z" level=info msg="Start event monitor" Dec 12 17:24:15.810640 containerd[1693]: time="2025-12-12T17:24:15.810616880Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:24:15.810640 containerd[1693]: time="2025-12-12T17:24:15.810627280Z" level=info msg="Start streaming server" Dec 12 17:24:15.810640 containerd[1693]: time="2025-12-12T17:24:15.810634640Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:24:15.810741 containerd[1693]: time="2025-12-12T17:24:15.810726880Z" level=info msg="runtime interface starting up..." Dec 12 17:24:15.810741 containerd[1693]: time="2025-12-12T17:24:15.810739720Z" level=info msg="starting plugins..." Dec 12 17:24:15.810788 containerd[1693]: time="2025-12-12T17:24:15.810755840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:24:15.811214 containerd[1693]: time="2025-12-12T17:24:15.811017480Z" level=info msg="containerd successfully booted in 0.158618s" Dec 12 17:24:15.811264 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 12 17:24:15.838937 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Dec 12 17:24:15.864278 extend-filesystems[1705]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:24:15.864278 extend-filesystems[1705]: old_desc_blocks = 1, new_desc_blocks = 6 Dec 12 17:24:15.864278 extend-filesystems[1705]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Dec 12 17:24:15.869906 extend-filesystems[1660]: Resized filesystem in /dev/vda9 Dec 12 17:24:15.865714 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:24:15.866042 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:24:15.965997 tar[1684]: linux-arm64/README.md Dec 12 17:24:15.984810 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:24:16.324469 sshd_keygen[1680]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:24:16.344077 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:24:16.347309 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:24:16.372957 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:24:16.374891 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:24:16.377500 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:24:16.399228 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:24:16.402223 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:24:16.404591 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:24:16.405867 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 12 17:24:16.436874 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:24:16.720909 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:24:16.775029 systemd-networkd[1605]: eth0: Gained IPv6LL Dec 12 17:24:16.777542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:24:16.779234 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:24:16.781463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:24:16.783366 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:24:16.811394 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:24:17.721409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:24:17.725127 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:24:18.291371 kubelet[1796]: E1212 17:24:18.291303 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:24:18.293945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:24:18.294075 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:24:18.294591 systemd[1]: kubelet.service: Consumed 792ms CPU time, 259.1M memory peak. 
Dec 12 17:24:18.449072 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:24:18.728877 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:24:22.458870 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:24:22.464406 coreos-metadata[1654]: Dec 12 17:24:22.464 WARN failed to locate config-drive, using the metadata service API instead Dec 12 17:24:22.480711 coreos-metadata[1654]: Dec 12 17:24:22.480 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 12 17:24:22.740885 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:24:22.742098 coreos-metadata[1654]: Dec 12 17:24:22.742 INFO Fetch successful Dec 12 17:24:22.742327 coreos-metadata[1654]: Dec 12 17:24:22.742 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 17:24:22.746984 coreos-metadata[1740]: Dec 12 17:24:22.746 WARN failed to locate config-drive, using the metadata service API instead Dec 12 17:24:22.759589 coreos-metadata[1740]: Dec 12 17:24:22.759 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 12 17:24:22.983282 coreos-metadata[1654]: Dec 12 17:24:22.983 INFO Fetch successful Dec 12 17:24:22.983457 coreos-metadata[1654]: Dec 12 17:24:22.983 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 12 17:24:23.069331 coreos-metadata[1740]: Dec 12 17:24:23.069 INFO Fetch successful Dec 12 17:24:23.069331 coreos-metadata[1740]: Dec 12 17:24:23.069 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 17:24:23.227970 coreos-metadata[1654]: Dec 12 17:24:23.227 INFO Fetch successful Dec 12 17:24:23.228150 coreos-metadata[1654]: Dec 12 17:24:23.228 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 12 17:24:23.313682 coreos-metadata[1740]: Dec 12 17:24:23.313 INFO Fetch successful Dec 12 17:24:23.315870 unknown[1740]: wrote ssh authorized keys 
file for user: core Dec 12 17:24:23.346529 update-ssh-keys[1816]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:24:23.347553 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 17:24:23.349428 systemd[1]: Finished sshkeys.service. Dec 12 17:24:23.357967 coreos-metadata[1654]: Dec 12 17:24:23.357 INFO Fetch successful Dec 12 17:24:23.358092 coreos-metadata[1654]: Dec 12 17:24:23.358 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 12 17:24:24.869900 coreos-metadata[1654]: Dec 12 17:24:24.869 INFO Fetch successful Dec 12 17:24:24.870318 coreos-metadata[1654]: Dec 12 17:24:24.869 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 12 17:24:24.991259 coreos-metadata[1654]: Dec 12 17:24:24.991 INFO Fetch successful Dec 12 17:24:25.022626 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 17:24:25.023124 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:24:25.023264 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:24:25.023385 systemd[1]: Startup finished in 2.395s (kernel) + 11.703s (initrd) + 12.228s (userspace) = 26.327s. Dec 12 17:24:28.545066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:24:28.546551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:24:29.313388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:24:29.317295 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:24:29.540778 kubelet[1832]: E1212 17:24:29.540732 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:24:29.544099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:24:29.544237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:24:29.545934 systemd[1]: kubelet.service: Consumed 155ms CPU time, 106.3M memory peak. Dec 12 17:24:30.445048 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:24:30.446280 systemd[1]: Started sshd@0-10.0.7.100:22-139.178.89.65:51728.service - OpenSSH per-connection server daemon (139.178.89.65:51728). Dec 12 17:24:31.380013 sshd[1841]: Accepted publickey for core from 139.178.89.65 port 51728 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:31.383393 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:31.390251 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:24:31.391110 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:24:31.394767 systemd-logind[1668]: New session 1 of user core. Dec 12 17:24:31.412677 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:24:31.416985 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 12 17:24:31.434737 (systemd)[1846]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:24:31.437249 systemd-logind[1668]: New session c1 of user core. Dec 12 17:24:31.549765 systemd[1846]: Queued start job for default target default.target. Dec 12 17:24:31.571048 systemd[1846]: Created slice app.slice - User Application Slice. Dec 12 17:24:31.571082 systemd[1846]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:24:31.571094 systemd[1846]: Reached target paths.target - Paths. Dec 12 17:24:31.571142 systemd[1846]: Reached target timers.target - Timers. Dec 12 17:24:31.572335 systemd[1846]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:24:31.573046 systemd[1846]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 17:24:31.582323 systemd[1846]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:24:31.582697 systemd[1846]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:24:31.582853 systemd[1846]: Reached target sockets.target - Sockets. Dec 12 17:24:31.582990 systemd[1846]: Reached target basic.target - Basic System. Dec 12 17:24:31.583097 systemd[1846]: Reached target default.target - Main User Target. Dec 12 17:24:31.583192 systemd[1846]: Startup finished in 140ms. Dec 12 17:24:31.583206 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:24:31.594132 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:24:32.104240 systemd[1]: Started sshd@1-10.0.7.100:22-139.178.89.65:51732.service - OpenSSH per-connection server daemon (139.178.89.65:51732). 
Dec 12 17:24:32.927189 sshd[1859]: Accepted publickey for core from 139.178.89.65 port 51732 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:32.928617 sshd-session[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:32.933502 systemd-logind[1668]: New session 2 of user core. Dec 12 17:24:32.940996 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:24:33.406066 sshd[1862]: Connection closed by 139.178.89.65 port 51732 Dec 12 17:24:33.406820 sshd-session[1859]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:33.410476 systemd[1]: sshd@1-10.0.7.100:22-139.178.89.65:51732.service: Deactivated successfully. Dec 12 17:24:33.413240 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:24:33.414020 systemd-logind[1668]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:24:33.415108 systemd-logind[1668]: Removed session 2. Dec 12 17:24:33.570110 systemd[1]: Started sshd@2-10.0.7.100:22-139.178.89.65:51744.service - OpenSSH per-connection server daemon (139.178.89.65:51744). Dec 12 17:24:34.393758 sshd[1868]: Accepted publickey for core from 139.178.89.65 port 51744 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:34.395225 sshd-session[1868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:34.399551 systemd-logind[1668]: New session 3 of user core. Dec 12 17:24:34.414201 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:24:34.859873 sshd[1871]: Connection closed by 139.178.89.65 port 51744 Dec 12 17:24:34.860499 sshd-session[1868]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:34.864197 systemd-logind[1668]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:24:34.864394 systemd[1]: sshd@2-10.0.7.100:22-139.178.89.65:51744.service: Deactivated successfully. 
Dec 12 17:24:34.865973 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:24:34.868704 systemd-logind[1668]: Removed session 3. Dec 12 17:24:35.029692 systemd[1]: Started sshd@3-10.0.7.100:22-139.178.89.65:51758.service - OpenSSH per-connection server daemon (139.178.89.65:51758). Dec 12 17:24:35.851671 sshd[1877]: Accepted publickey for core from 139.178.89.65 port 51758 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:35.853394 sshd-session[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:35.857900 systemd-logind[1668]: New session 4 of user core. Dec 12 17:24:35.875052 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:24:36.320995 sshd[1880]: Connection closed by 139.178.89.65 port 51758 Dec 12 17:24:36.321210 sshd-session[1877]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:36.325047 systemd[1]: sshd@3-10.0.7.100:22-139.178.89.65:51758.service: Deactivated successfully. Dec 12 17:24:36.327207 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:24:36.327986 systemd-logind[1668]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:24:36.329020 systemd-logind[1668]: Removed session 4. Dec 12 17:24:36.501128 systemd[1]: Started sshd@4-10.0.7.100:22-139.178.89.65:51774.service - OpenSSH per-connection server daemon (139.178.89.65:51774). Dec 12 17:24:37.390069 sshd[1886]: Accepted publickey for core from 139.178.89.65 port 51774 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:37.391470 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:37.395363 systemd-logind[1668]: New session 5 of user core. Dec 12 17:24:37.410131 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 12 17:24:37.742905 sudo[1890]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:24:37.743347 sudo[1890]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:24:37.762905 sudo[1890]: pam_unix(sudo:session): session closed for user root Dec 12 17:24:37.930170 sshd[1889]: Connection closed by 139.178.89.65 port 51774 Dec 12 17:24:37.930987 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:37.935110 systemd[1]: sshd@4-10.0.7.100:22-139.178.89.65:51774.service: Deactivated successfully. Dec 12 17:24:37.936698 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:24:37.937444 systemd-logind[1668]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:24:37.938490 systemd-logind[1668]: Removed session 5. Dec 12 17:24:38.100386 systemd[1]: Started sshd@5-10.0.7.100:22-139.178.89.65:51788.service - OpenSSH per-connection server daemon (139.178.89.65:51788). Dec 12 17:24:38.910784 sshd[1896]: Accepted publickey for core from 139.178.89.65 port 51788 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:38.912135 sshd-session[1896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:38.916535 systemd-logind[1668]: New session 6 of user core. Dec 12 17:24:38.932005 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 12 17:24:39.226965 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:24:39.227216 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:24:39.232024 sudo[1901]: pam_unix(sudo:session): session closed for user root Dec 12 17:24:39.237824 sudo[1900]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:24:39.238102 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:24:39.246848 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:24:39.279000 chronyd[1652]: Selected source PHC0 Dec 12 17:24:39.281000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:24:39.282976 kernel: kauditd_printk_skb: 188 callbacks suppressed Dec 12 17:24:39.283022 kernel: audit: type=1305 audit(1765560279.281:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:24:39.283092 augenrules[1923]: No rules Dec 12 17:24:39.284410 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:24:39.281000 audit[1923]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd9ebd1e0 a2=420 a3=0 items=0 ppid=1904 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:39.285953 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 12 17:24:39.288139 kernel: audit: type=1300 audit(1765560279.281:232): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd9ebd1e0 a2=420 a3=0 items=0 ppid=1904 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:39.281000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:24:39.289092 sudo[1900]: pam_unix(sudo:session): session closed for user root Dec 12 17:24:39.290073 kernel: audit: type=1327 audit(1765560279.281:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:24:39.290142 kernel: audit: type=1130 audit(1765560279.285:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.294487 kernel: audit: type=1131 audit(1765560279.285:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:39.294576 kernel: audit: type=1106 audit(1765560279.288:235): pid=1900 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.288000 audit[1900]: USER_END pid=1900 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.297045 kernel: audit: type=1104 audit(1765560279.288:236): pid=1900 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.288000 audit[1900]: CRED_DISP pid=1900 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.457951 sshd[1899]: Connection closed by 139.178.89.65 port 51788 Dec 12 17:24:39.458076 sshd-session[1896]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:39.458000 audit[1896]: USER_END pid=1896 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:39.462201 systemd[1]: sshd@5-10.0.7.100:22-139.178.89.65:51788.service: Deactivated successfully. 
Dec 12 17:24:39.458000 audit[1896]: CRED_DISP pid=1896 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:39.464126 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:24:39.464902 systemd-logind[1668]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:24:39.465795 kernel: audit: type=1106 audit(1765560279.458:237): pid=1896 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:39.465870 kernel: audit: type=1104 audit(1765560279.458:238): pid=1896 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:39.465891 kernel: audit: type=1131 audit(1765560279.460:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.7.100:22-139.178.89.65:51788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.7.100:22-139.178.89.65:51788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.466336 systemd-logind[1668]: Removed session 6. Dec 12 17:24:39.632448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:24:39.634830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 12 17:24:39.636100 systemd[1]: Started sshd@6-10.0.7.100:22-139.178.89.65:51800.service - OpenSSH per-connection server daemon (139.178.89.65:51800). Dec 12 17:24:39.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.7.100:22-139.178.89.65:51800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.752757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:24:39.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.756629 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:24:40.346029 kubelet[1943]: E1212 17:24:40.345954 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:24:40.348698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:24:40.348838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:24:40.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:24:40.351013 systemd[1]: kubelet.service: Consumed 151ms CPU time, 111.8M memory peak. 
Dec 12 17:24:40.510000 audit[1933]: USER_ACCT pid=1933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:40.511947 sshd[1933]: Accepted publickey for core from 139.178.89.65 port 51800 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:40.511000 audit[1933]: CRED_ACQ pid=1933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:40.511000 audit[1933]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffad5abb0 a2=3 a3=0 items=0 ppid=1 pid=1933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:40.511000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:40.513429 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:40.519018 systemd-logind[1668]: New session 7 of user core. Dec 12 17:24:40.530142 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 12 17:24:40.531000 audit[1933]: USER_START pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:40.533000 audit[1952]: CRED_ACQ pid=1952 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:40.851129 sudo[1953]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:24:40.849000 audit[1953]: USER_ACCT pid=1953 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:24:40.849000 audit[1953]: CRED_REFR pid=1953 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:24:40.851880 sudo[1953]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:24:40.853000 audit[1953]: USER_START pid=1953 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:24:41.575347 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 12 17:24:41.594207 (dockerd)[1974]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:24:42.074501 dockerd[1974]: time="2025-12-12T17:24:42.073672696Z" level=info msg="Starting up" Dec 12 17:24:42.075929 dockerd[1974]: time="2025-12-12T17:24:42.075856187Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:24:42.087200 dockerd[1974]: time="2025-12-12T17:24:42.087161044Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:24:42.142849 systemd[1]: var-lib-docker-metacopy\x2dcheck2885135731-merged.mount: Deactivated successfully. Dec 12 17:24:42.153881 dockerd[1974]: time="2025-12-12T17:24:42.153832446Z" level=info msg="Loading containers: start." Dec 12 17:24:42.178944 kernel: Initializing XFRM netlink socket Dec 12 17:24:42.253000 audit[2025]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.253000 audit[2025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc16b23e0 a2=0 a3=0 items=0 ppid=1974 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.253000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:24:42.256000 audit[2027]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.256000 audit[2027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd14ea6a0 a2=0 a3=0 items=0 ppid=1974 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:24:42.258000 audit[2029]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.258000 audit[2029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe941a050 a2=0 a3=0 items=0 ppid=1974 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:24:42.260000 audit[2031]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.260000 audit[2031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff90b4770 a2=0 a3=0 items=0 ppid=1974 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:24:42.262000 audit[2033]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.262000 audit[2033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3689620 a2=0 a3=0 items=0 ppid=1974 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:24:42.264000 audit[2035]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.264000 audit[2035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd35b9930 a2=0 a3=0 items=0 ppid=1974 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.264000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:24:42.265000 audit[2037]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.265000 audit[2037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd0905990 a2=0 a3=0 items=0 ppid=1974 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:24:42.266000 audit[2039]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.266000 audit[2039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffe2442250 a2=0 a3=0 items=0 ppid=1974 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:24:42.315000 audit[2042]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.315000 audit[2042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd6c76d10 a2=0 a3=0 items=0 ppid=1974 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.315000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:24:42.317000 audit[2044]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.317000 audit[2044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffcf99e80 a2=0 a3=0 items=0 ppid=1974 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.317000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:24:42.318000 audit[2046]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.318000 audit[2046]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=236 a0=3 a1=ffffe86f8060 a2=0 a3=0 items=0 ppid=1974 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.318000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:24:42.321000 audit[2048]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.321000 audit[2048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffcc1aef60 a2=0 a3=0 items=0 ppid=1974 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:24:42.323000 audit[2050]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.323000 audit[2050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffffe7fb10 a2=0 a3=0 items=0 ppid=1974 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.323000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:24:42.362000 audit[2080]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.362000 
audit[2080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe2256d20 a2=0 a3=0 items=0 ppid=1974 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.362000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:24:42.364000 audit[2082]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.364000 audit[2082]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe38d2f10 a2=0 a3=0 items=0 ppid=1974 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.364000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:24:42.366000 audit[2084]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.366000 audit[2084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf55e150 a2=0 a3=0 items=0 ppid=1974 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:24:42.368000 audit[2086]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.368000 audit[2086]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8278070 a2=0 a3=0 items=0 ppid=1974 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:24:42.370000 audit[2088]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.370000 audit[2088]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd06fc510 a2=0 a3=0 items=0 ppid=1974 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.370000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:24:42.373000 audit[2090]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.373000 audit[2090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc359d4c0 a2=0 a3=0 items=0 ppid=1974 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.373000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:24:42.374000 audit[2092]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.374000 
audit[2092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffe79cb60 a2=0 a3=0 items=0 ppid=1974 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:24:42.376000 audit[2094]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.376000 audit[2094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffff436320 a2=0 a3=0 items=0 ppid=1974 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.376000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:24:42.378000 audit[2096]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.378000 audit[2096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffdb379410 a2=0 a3=0 items=0 ppid=1974 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 
17:24:42.380000 audit[2098]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.380000 audit[2098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe67284b0 a2=0 a3=0 items=0 ppid=1974 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.380000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:24:42.382000 audit[2100]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.382000 audit[2100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd21786b0 a2=0 a3=0 items=0 ppid=1974 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.382000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:24:42.384000 audit[2102]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.384000 audit[2102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe5b576f0 a2=0 a3=0 items=0 ppid=1974 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.384000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:24:42.386000 audit[2104]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.386000 audit[2104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffce84a8a0 a2=0 a3=0 items=0 ppid=1974 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.386000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:24:42.391000 audit[2109]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.391000 audit[2109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdbd83610 a2=0 a3=0 items=0 ppid=1974 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.391000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:24:42.393000 audit[2111]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.393000 audit[2111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcc5eeeb0 a2=0 a3=0 items=0 ppid=1974 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:24:42.393000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:24:42.395000 audit[2113]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.395000 audit[2113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdfa5b480 a2=0 a3=0 items=0 ppid=1974 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.395000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:24:42.397000 audit[2115]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.397000 audit[2115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffa51dc00 a2=0 a3=0 items=0 ppid=1974 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.397000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:24:42.399000 audit[2117]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.399000 audit[2117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe29687f0 a2=0 a3=0 items=0 ppid=1974 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.399000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:24:42.401000 audit[2119]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:24:42.401000 audit[2119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcc2fb270 a2=0 a3=0 items=0 ppid=1974 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.401000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:24:42.422000 audit[2124]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.422000 audit[2124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff5e11750 a2=0 a3=0 items=0 ppid=1974 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:24:42.423000 audit[2126]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.423000 audit[2126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffffe36da40 a2=0 a3=0 items=0 ppid=1974 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.423000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:24:42.432000 audit[2134]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.432000 audit[2134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffcbc9bcf0 a2=0 a3=0 items=0 ppid=1974 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.432000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:24:42.442000 audit[2140]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.442000 audit[2140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcb889d30 a2=0 a3=0 items=0 ppid=1974 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:24:42.445000 audit[2142]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.445000 audit[2142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff1dd0be0 a2=0 a3=0 items=0 ppid=1974 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.445000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:24:42.446000 audit[2144]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.446000 audit[2144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffebbaea30 a2=0 a3=0 items=0 ppid=1974 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.446000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:24:42.448000 audit[2146]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.448000 audit[2146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff6fcbe30 a2=0 a3=0 items=0 ppid=1974 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.448000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:24:42.449000 audit[2148]: NETFILTER_CFG table=filter:41 family=2 entries=1 
op=nft_register_rule pid=2148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:24:42.449000 audit[2148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffdafeae0 a2=0 a3=0 items=0 ppid=1974 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:42.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:24:42.452369 systemd-networkd[1605]: docker0: Link UP Dec 12 17:24:42.471206 dockerd[1974]: time="2025-12-12T17:24:42.471125405Z" level=info msg="Loading containers: done." Dec 12 17:24:42.484606 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3589074379-merged.mount: Deactivated successfully. Dec 12 17:24:42.498210 dockerd[1974]: time="2025-12-12T17:24:42.498159129Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:24:42.498394 dockerd[1974]: time="2025-12-12T17:24:42.498292727Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:24:42.498517 dockerd[1974]: time="2025-12-12T17:24:42.498488811Z" level=info msg="Initializing buildkit" Dec 12 17:24:42.522179 dockerd[1974]: time="2025-12-12T17:24:42.522113138Z" level=info msg="Completed buildkit initialization" Dec 12 17:24:42.528253 dockerd[1974]: time="2025-12-12T17:24:42.528198252Z" level=info msg="Daemon has completed initialization" Dec 12 17:24:42.528331 dockerd[1974]: time="2025-12-12T17:24:42.528262429Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:24:42.529137 systemd[1]: Started docker.service - Docker 
Application Container Engine. Dec 12 17:24:42.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.944413 containerd[1693]: time="2025-12-12T17:24:43.944378046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 17:24:44.659557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871061659.mount: Deactivated successfully. Dec 12 17:24:45.735657 containerd[1693]: time="2025-12-12T17:24:45.735588964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:45.736637 containerd[1693]: time="2025-12-12T17:24:45.736570609Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Dec 12 17:24:45.737589 containerd[1693]: time="2025-12-12T17:24:45.737548414Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:45.743269 containerd[1693]: time="2025-12-12T17:24:45.743213323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:45.744448 containerd[1693]: time="2025-12-12T17:24:45.744209448Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.799795402s" Dec 12 17:24:45.744448 containerd[1693]: 
time="2025-12-12T17:24:45.744248168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 12 17:24:45.745966 containerd[1693]: time="2025-12-12T17:24:45.745918537Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 17:24:47.294584 containerd[1693]: time="2025-12-12T17:24:47.294525445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:47.296235 containerd[1693]: time="2025-12-12T17:24:47.296199254Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 12 17:24:47.297663 containerd[1693]: time="2025-12-12T17:24:47.297214059Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:47.300494 containerd[1693]: time="2025-12-12T17:24:47.300465755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:47.301361 containerd[1693]: time="2025-12-12T17:24:47.301332560Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.555385663s" Dec 12 17:24:47.301418 containerd[1693]: time="2025-12-12T17:24:47.301365920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image 
reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 12 17:24:47.302048 containerd[1693]: time="2025-12-12T17:24:47.302020683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 17:24:48.524893 containerd[1693]: time="2025-12-12T17:24:48.524654851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:48.525884 containerd[1693]: time="2025-12-12T17:24:48.525673696Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 12 17:24:48.527734 containerd[1693]: time="2025-12-12T17:24:48.527703626Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:48.530637 containerd[1693]: time="2025-12-12T17:24:48.530600361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:48.532355 containerd[1693]: time="2025-12-12T17:24:48.532326330Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.230276887s" Dec 12 17:24:48.532463 containerd[1693]: time="2025-12-12T17:24:48.532447810Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 12 17:24:48.532922 containerd[1693]: time="2025-12-12T17:24:48.532885653Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 17:24:49.481684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931664836.mount: Deactivated successfully. Dec 12 17:24:49.709925 containerd[1693]: time="2025-12-12T17:24:49.709876633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:49.712110 containerd[1693]: time="2025-12-12T17:24:49.712059284Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=18413667" Dec 12 17:24:49.713078 containerd[1693]: time="2025-12-12T17:24:49.713041169Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:49.715269 containerd[1693]: time="2025-12-12T17:24:49.715238220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:49.716201 containerd[1693]: time="2025-12-12T17:24:49.716169745Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.183238372s" Dec 12 17:24:49.716387 containerd[1693]: time="2025-12-12T17:24:49.716288026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 12 17:24:49.716841 containerd[1693]: time="2025-12-12T17:24:49.716809028Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 17:24:50.330921 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3221811856.mount: Deactivated successfully. Dec 12 17:24:50.599212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 17:24:50.601032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:24:50.740805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:24:50.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:50.743618 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 12 17:24:50.743689 kernel: audit: type=1130 audit(1765560290.740:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:50.746770 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:24:50.779583 kubelet[2320]: E1212 17:24:50.779527 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:24:50.782198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:24:50.782429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:24:50.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 12 17:24:50.782853 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.2M memory peak. Dec 12 17:24:50.785862 kernel: audit: type=1131 audit(1765560290.781:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:24:51.422382 containerd[1693]: time="2025-12-12T17:24:51.422014612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:51.423905 containerd[1693]: time="2025-12-12T17:24:51.423849942Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 12 17:24:51.425452 containerd[1693]: time="2025-12-12T17:24:51.424942267Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:51.428653 containerd[1693]: time="2025-12-12T17:24:51.428613606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:51.430446 containerd[1693]: time="2025-12-12T17:24:51.430415695Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.713577347s" Dec 12 17:24:51.430523 containerd[1693]: time="2025-12-12T17:24:51.430456655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference 
\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 12 17:24:51.430982 containerd[1693]: time="2025-12-12T17:24:51.430956018Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:24:51.950912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount382919613.mount: Deactivated successfully. Dec 12 17:24:51.958687 containerd[1693]: time="2025-12-12T17:24:51.958619539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:24:51.959701 containerd[1693]: time="2025-12-12T17:24:51.959647464Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:24:51.960947 containerd[1693]: time="2025-12-12T17:24:51.960890710Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:24:51.963114 containerd[1693]: time="2025-12-12T17:24:51.963074241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:24:51.963897 containerd[1693]: time="2025-12-12T17:24:51.963869646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 532.877307ms" Dec 12 17:24:51.963946 containerd[1693]: time="2025-12-12T17:24:51.963902966Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image 
reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:24:51.964311 containerd[1693]: time="2025-12-12T17:24:51.964288088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 17:24:52.781200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133791652.mount: Deactivated successfully. Dec 12 17:24:54.699690 containerd[1693]: time="2025-12-12T17:24:54.699623786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:54.701307 containerd[1693]: time="2025-12-12T17:24:54.701260834Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Dec 12 17:24:54.702492 containerd[1693]: time="2025-12-12T17:24:54.702440040Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:54.705501 containerd[1693]: time="2025-12-12T17:24:54.705103654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:24:54.706150 containerd[1693]: time="2025-12-12T17:24:54.706108299Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.741785091s" Dec 12 17:24:54.706219 containerd[1693]: time="2025-12-12T17:24:54.706153459Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 12 17:25:00.584791 systemd[1]: Stopped 
kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:00.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:00.584968 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.2M memory peak. Dec 12 17:25:00.586756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:00.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:00.589395 kernel: audit: type=1130 audit(1765560300.584:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:00.589459 kernel: audit: type=1131 audit(1765560300.584:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:00.609526 systemd[1]: Reload requested from client PID 2422 ('systemctl') (unit session-7.scope)... Dec 12 17:25:00.609869 systemd[1]: Reloading... Dec 12 17:25:00.690869 zram_generator::config[2474]: No configuration found. Dec 12 17:25:00.714311 update_engine[1671]: I20251212 17:25:00.713873 1671 update_attempter.cc:509] Updating boot flags... Dec 12 17:25:01.153782 systemd[1]: Reloading finished in 543 ms. 
Dec 12 17:25:01.182795 kernel: audit: type=1334 audit(1765560301.179:296): prog-id=63 op=LOAD Dec 12 17:25:01.182920 kernel: audit: type=1334 audit(1765560301.179:297): prog-id=64 op=LOAD Dec 12 17:25:01.182947 kernel: audit: type=1334 audit(1765560301.179:298): prog-id=56 op=UNLOAD Dec 12 17:25:01.182969 kernel: audit: type=1334 audit(1765560301.179:299): prog-id=57 op=UNLOAD Dec 12 17:25:01.182994 kernel: audit: type=1334 audit(1765560301.180:300): prog-id=65 op=LOAD Dec 12 17:25:01.183016 kernel: audit: type=1334 audit(1765560301.180:301): prog-id=59 op=UNLOAD Dec 12 17:25:01.183036 kernel: audit: type=1334 audit(1765560301.182:302): prog-id=66 op=LOAD Dec 12 17:25:01.183053 kernel: audit: type=1334 audit(1765560301.182:303): prog-id=44 op=UNLOAD Dec 12 17:25:01.179000 audit: BPF prog-id=63 op=LOAD Dec 12 17:25:01.179000 audit: BPF prog-id=64 op=LOAD Dec 12 17:25:01.179000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:25:01.179000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:25:01.180000 audit: BPF prog-id=65 op=LOAD Dec 12 17:25:01.180000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:25:01.182000 audit: BPF prog-id=66 op=LOAD Dec 12 17:25:01.182000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:25:01.182000 audit: BPF prog-id=67 op=LOAD Dec 12 17:25:01.182000 audit: BPF prog-id=68 op=LOAD Dec 12 17:25:01.182000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:25:01.182000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:25:01.183000 audit: BPF prog-id=69 op=LOAD Dec 12 17:25:01.183000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:25:01.184000 audit: BPF prog-id=70 op=LOAD Dec 12 17:25:01.184000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:25:01.185000 audit: BPF prog-id=71 op=LOAD Dec 12 17:25:01.185000 audit: BPF prog-id=72 op=LOAD Dec 12 17:25:01.185000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:25:01.185000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:25:01.186000 audit: BPF prog-id=73 op=LOAD Dec 12 17:25:01.186000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:25:01.186000 audit: BPF prog-id=74 
op=LOAD Dec 12 17:25:01.186000 audit: BPF prog-id=75 op=LOAD Dec 12 17:25:01.186000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:25:01.186000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:25:01.199000 audit: BPF prog-id=76 op=LOAD Dec 12 17:25:01.199000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:25:01.200000 audit: BPF prog-id=77 op=LOAD Dec 12 17:25:01.200000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:25:01.200000 audit: BPF prog-id=78 op=LOAD Dec 12 17:25:01.200000 audit: BPF prog-id=79 op=LOAD Dec 12 17:25:01.200000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:25:01.200000 audit: BPF prog-id=49 op=UNLOAD Dec 12 17:25:01.202000 audit: BPF prog-id=80 op=LOAD Dec 12 17:25:01.202000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:25:01.202000 audit: BPF prog-id=81 op=LOAD Dec 12 17:25:01.202000 audit: BPF prog-id=82 op=LOAD Dec 12 17:25:01.202000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:25:01.202000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:25:01.219737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:01.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:01.231406 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:25:01.246396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:01.251814 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:25:01.252068 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:01.252129 systemd[1]: kubelet.service: Consumed 115ms CPU time, 101.3M memory peak. Dec 12 17:25:01.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:01.255352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:01.649904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:01.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:01.654024 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:25:01.683460 kubelet[2541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:25:01.683460 kubelet[2541]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:25:01.683460 kubelet[2541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:25:01.683787 kubelet[2541]: I1212 17:25:01.683507 2541 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:25:02.350133 kubelet[2541]: I1212 17:25:02.350084 2541 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:25:02.350133 kubelet[2541]: I1212 17:25:02.350120 2541 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:25:02.350353 kubelet[2541]: I1212 17:25:02.350339 2541 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:25:02.374765 kubelet[2541]: E1212 17:25:02.374717 2541 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.7.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.7.100:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:25:02.375674 kubelet[2541]: I1212 17:25:02.375656 2541 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:25:02.386523 kubelet[2541]: I1212 17:25:02.386489 2541 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:25:02.389870 kubelet[2541]: I1212 17:25:02.389233 2541 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:25:02.389870 kubelet[2541]: I1212 17:25:02.389562 2541 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:25:02.389870 kubelet[2541]: I1212 17:25:02.389587 2541 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-4-1de611bfb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:25:02.389870 kubelet[2541]: I1212 17:25:02.389814 2541 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 
17:25:02.390078 kubelet[2541]: I1212 17:25:02.389822 2541 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:25:02.390078 kubelet[2541]: I1212 17:25:02.390049 2541 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:25:02.394053 kubelet[2541]: I1212 17:25:02.394003 2541 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:25:02.394053 kubelet[2541]: I1212 17:25:02.394032 2541 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:25:02.394053 kubelet[2541]: I1212 17:25:02.394058 2541 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:25:02.394199 kubelet[2541]: I1212 17:25:02.394071 2541 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:25:02.395866 kubelet[2541]: I1212 17:25:02.395256 2541 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:25:02.396129 kubelet[2541]: I1212 17:25:02.396002 2541 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:25:02.396180 kubelet[2541]: W1212 17:25:02.396143 2541 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 12 17:25:02.400052 kubelet[2541]: I1212 17:25:02.400017 2541 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:25:02.400145 kubelet[2541]: I1212 17:25:02.400063 2541 server.go:1289] "Started kubelet" Dec 12 17:25:02.400779 kubelet[2541]: E1212 17:25:02.400741 2541 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.7.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-4-1de611bfb5&limit=500&resourceVersion=0\": dial tcp 10.0.7.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:25:02.401014 kubelet[2541]: E1212 17:25:02.400988 2541 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.7.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.7.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:25:02.401811 kubelet[2541]: I1212 17:25:02.401772 2541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:25:02.402685 kubelet[2541]: I1212 17:25:02.401230 2541 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:25:02.404456 kubelet[2541]: I1212 17:25:02.403868 2541 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:25:02.404760 kubelet[2541]: I1212 17:25:02.404731 2541 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:25:02.408003 kubelet[2541]: I1212 17:25:02.407942 2541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:25:02.408270 kubelet[2541]: I1212 17:25:02.408237 2541 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:25:02.408774 
kubelet[2541]: E1212 17:25:02.408738 2541 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" Dec 12 17:25:02.408931 kubelet[2541]: I1212 17:25:02.408898 2541 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:25:02.409364 kubelet[2541]: E1212 17:25:02.409317 2541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-4-1de611bfb5?timeout=10s\": dial tcp 10.0.7.100:6443: connect: connection refused" interval="200ms" Dec 12 17:25:02.409454 kubelet[2541]: I1212 17:25:02.409430 2541 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:25:02.409565 kubelet[2541]: I1212 17:25:02.409555 2541 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:25:02.410253 kubelet[2541]: E1212 17:25:02.410226 2541 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.7.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.7.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:25:02.410463 kubelet[2541]: I1212 17:25:02.410444 2541 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:25:02.410651 kubelet[2541]: I1212 17:25:02.410634 2541 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:25:02.412000 audit[2558]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:02.412000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc0622800 a2=0 a3=0 items=0 ppid=2541 pid=2558 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:25:02.413508 kubelet[2541]: I1212 17:25:02.412562 2541 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:25:02.414108 kubelet[2541]: E1212 17:25:02.414063 2541 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:25:02.413000 audit[2559]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:02.413000 audit[2559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc80ac4c0 a2=0 a3=0 items=0 ppid=2541 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:25:02.416651 kubelet[2541]: E1212 17:25:02.407885 2541 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.7.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.7.100:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-4-1de611bfb5.188087bd9e15e317 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-4-1de611bfb5,UID:ci-4515-1-0-4-1de611bfb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-4-1de611bfb5,},FirstTimestamp:2025-12-12 17:25:02.400037655 +0000 UTC m=+0.742895615,LastTimestamp:2025-12-12 17:25:02.400037655 +0000 UTC m=+0.742895615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-4-1de611bfb5,}" Dec 12 17:25:02.416000 audit[2562]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:02.416000 audit[2562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcb2279f0 a2=0 a3=0 items=0 ppid=2541 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:25:02.418000 audit[2565]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:02.418000 audit[2565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe8bd2920 a2=0 a3=0 items=0 ppid=2541 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:25:02.425000 audit[2568]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:02.425000 audit[2568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 
a0=3 a1=ffffed698f30 a2=0 a3=0 items=0 ppid=2541 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.425000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 17:25:02.426630 kubelet[2541]: I1212 17:25:02.426579 2541 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:25:02.426735 kubelet[2541]: I1212 17:25:02.426720 2541 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:25:02.426781 kubelet[2541]: I1212 17:25:02.426773 2541 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:25:02.426854 kubelet[2541]: I1212 17:25:02.426845 2541 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:25:02.426000 audit[2569]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:02.426000 audit[2569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffea666d70 a2=0 a3=0 items=0 ppid=2541 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:25:02.426000 audit[2570]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:02.426000 audit[2570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 
a1=fffffa28e1e0 a2=0 a3=0 items=0 ppid=2541 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:25:02.429020 kubelet[2541]: I1212 17:25:02.427888 2541 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:25:02.429020 kubelet[2541]: I1212 17:25:02.427917 2541 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:25:02.429020 kubelet[2541]: I1212 17:25:02.427933 2541 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:25:02.429020 kubelet[2541]: I1212 17:25:02.427941 2541 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:25:02.429020 kubelet[2541]: E1212 17:25:02.427985 2541 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:25:02.428000 audit[2571]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:02.428000 audit[2571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd971340 a2=0 a3=0 items=0 ppid=2541 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:25:02.429316 kubelet[2541]: E1212 17:25:02.429242 2541 reflector.go:200] "Failed to watch" 
err="failed to list *v1.RuntimeClass: Get \"https://10.0.7.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.7.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:25:02.428000 audit[2572]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:02.428000 audit[2572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2291c30 a2=0 a3=0 items=0 ppid=2541 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:25:02.429000 audit[2573]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:02.429000 audit[2573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd223f8f0 a2=0 a3=0 items=0 ppid=2541 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.429000 audit[2574]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:02.429000 audit[2574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe8e1a10 a2=0 a3=0 items=0 ppid=2541 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.429000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:25:02.430952 kubelet[2541]: I1212 17:25:02.430916 2541 policy_none.go:49] "None policy: Start" Dec 12 17:25:02.430952 kubelet[2541]: I1212 17:25:02.430939 2541 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:25:02.430952 kubelet[2541]: I1212 17:25:02.430951 2541 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:25:02.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:25:02.431000 audit[2575]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:02.431000 audit[2575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5827c10 a2=0 a3=0 items=0 ppid=2541 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:25:02.436364 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:25:02.452039 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:25:02.465244 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 12 17:25:02.466682 kubelet[2541]: E1212 17:25:02.466659 2541 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:25:02.467102 kubelet[2541]: I1212 17:25:02.466870 2541 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:25:02.467102 kubelet[2541]: I1212 17:25:02.466887 2541 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:25:02.467643 kubelet[2541]: I1212 17:25:02.467623 2541 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:25:02.468346 kubelet[2541]: E1212 17:25:02.468321 2541 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:25:02.468469 kubelet[2541]: E1212 17:25:02.468455 2541 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515-1-0-4-1de611bfb5\" not found" Dec 12 17:25:02.539863 systemd[1]: Created slice kubepods-burstable-pod21e36c28d02d1478eb8d1c9c46311be1.slice - libcontainer container kubepods-burstable-pod21e36c28d02d1478eb8d1c9c46311be1.slice. Dec 12 17:25:02.552287 kubelet[2541]: E1212 17:25:02.552152 2541 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.554381 systemd[1]: Created slice kubepods-burstable-pod6d496e60638b37f8dc43e8d995ec6acf.slice - libcontainer container kubepods-burstable-pod6d496e60638b37f8dc43e8d995ec6acf.slice. 
Dec 12 17:25:02.556188 kubelet[2541]: E1212 17:25:02.556016 2541 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.558391 systemd[1]: Created slice kubepods-burstable-podc42d3836f11ede3333e58f8f4fc0d3eb.slice - libcontainer container kubepods-burstable-podc42d3836f11ede3333e58f8f4fc0d3eb.slice. Dec 12 17:25:02.560037 kubelet[2541]: E1212 17:25:02.560011 2541 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.568774 kubelet[2541]: I1212 17:25:02.568735 2541 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.569251 kubelet[2541]: E1212 17:25:02.569216 2541 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.100:6443/api/v1/nodes\": dial tcp 10.0.7.100:6443: connect: connection refused" node="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.610972 kubelet[2541]: I1212 17:25:02.610509 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21e36c28d02d1478eb8d1c9c46311be1-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-4-1de611bfb5\" (UID: \"21e36c28d02d1478eb8d1c9c46311be1\") " pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.611331 kubelet[2541]: E1212 17:25:02.611225 2541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-4-1de611bfb5?timeout=10s\": dial tcp 10.0.7.100:6443: connect: connection refused" interval="400ms" Dec 12 17:25:02.710769 kubelet[2541]: I1212 17:25:02.710695 2541 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.710769 kubelet[2541]: I1212 17:25:02.710737 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.710769 kubelet[2541]: I1212 17:25:02.710761 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c42d3836f11ede3333e58f8f4fc0d3eb-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-4-1de611bfb5\" (UID: \"c42d3836f11ede3333e58f8f4fc0d3eb\") " pod="kube-system/kube-scheduler-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.711498 kubelet[2541]: I1212 17:25:02.710799 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21e36c28d02d1478eb8d1c9c46311be1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-4-1de611bfb5\" (UID: \"21e36c28d02d1478eb8d1c9c46311be1\") " pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.711498 kubelet[2541]: I1212 17:25:02.710863 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " 
pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.711498 kubelet[2541]: I1212 17:25:02.710916 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.711498 kubelet[2541]: I1212 17:25:02.710964 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.711498 kubelet[2541]: I1212 17:25:02.711008 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21e36c28d02d1478eb8d1c9c46311be1-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-4-1de611bfb5\" (UID: \"21e36c28d02d1478eb8d1c9c46311be1\") " pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.771326 kubelet[2541]: I1212 17:25:02.771303 2541 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.771706 kubelet[2541]: E1212 17:25:02.771674 2541 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.100:6443/api/v1/nodes\": dial tcp 10.0.7.100:6443: connect: connection refused" node="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:02.854028 containerd[1693]: time="2025-12-12T17:25:02.853957442Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-4-1de611bfb5,Uid:21e36c28d02d1478eb8d1c9c46311be1,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:02.859798 containerd[1693]: time="2025-12-12T17:25:02.859726271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-4-1de611bfb5,Uid:6d496e60638b37f8dc43e8d995ec6acf,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:02.862102 containerd[1693]: time="2025-12-12T17:25:02.862004123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-4-1de611bfb5,Uid:c42d3836f11ede3333e58f8f4fc0d3eb,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:02.893873 containerd[1693]: time="2025-12-12T17:25:02.893335002Z" level=info msg="connecting to shim add22b9ef730dff629ab5b6cd0cf73d37f129dd67e1f478b91d0e4ef2b4a6451" address="unix:///run/containerd/s/5f730e6d44ffac334cc43be89ac87a0e5c67bc4afbac368a3cf92a49a481528b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:02.895974 containerd[1693]: time="2025-12-12T17:25:02.895923135Z" level=info msg="connecting to shim 92a8119cfa66abdffe30f70b54fcc7b634f84c9ba90d695b2ac1ebe3edef9754" address="unix:///run/containerd/s/9f0d08f4d12e414605641589956e51bf7b5e3dfb78861689951c22a0c9e89e50" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:02.909019 containerd[1693]: time="2025-12-12T17:25:02.908871961Z" level=info msg="connecting to shim 3a3e7774dcaf57c074ce5844321092cc165bf0276d859f3b4c53a93cdc3ae2c8" address="unix:///run/containerd/s/228503e13fe7a9ea3773230c72b55e2c8bda747a6e2bc32dc1cd95957f1fd367" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:02.921109 systemd[1]: Started cri-containerd-92a8119cfa66abdffe30f70b54fcc7b634f84c9ba90d695b2ac1ebe3edef9754.scope - libcontainer container 92a8119cfa66abdffe30f70b54fcc7b634f84c9ba90d695b2ac1ebe3edef9754. 
Dec 12 17:25:02.923981 systemd[1]: Started cri-containerd-add22b9ef730dff629ab5b6cd0cf73d37f129dd67e1f478b91d0e4ef2b4a6451.scope - libcontainer container add22b9ef730dff629ab5b6cd0cf73d37f129dd67e1f478b91d0e4ef2b4a6451. Dec 12 17:25:02.932047 systemd[1]: Started cri-containerd-3a3e7774dcaf57c074ce5844321092cc165bf0276d859f3b4c53a93cdc3ae2c8.scope - libcontainer container 3a3e7774dcaf57c074ce5844321092cc165bf0276d859f3b4c53a93cdc3ae2c8. Dec 12 17:25:02.934000 audit: BPF prog-id=83 op=LOAD Dec 12 17:25:02.935000 audit: BPF prog-id=84 op=LOAD Dec 12 17:25:02.935000 audit[2623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2593 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613831313963666136366162646666653330663730623534666363 Dec 12 17:25:02.935000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:25:02.935000 audit[2623]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613831313963666136366162646666653330663730623534666363 Dec 12 17:25:02.935000 audit: BPF prog-id=85 op=LOAD Dec 12 17:25:02.935000 audit[2623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001763e8 a2=98 a3=0 items=0 ppid=2593 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613831313963666136366162646666653330663730623534666363 Dec 12 17:25:02.935000 audit: BPF prog-id=86 op=LOAD Dec 12 17:25:02.935000 audit[2623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2593 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613831313963666136366162646666653330663730623534666363 Dec 12 17:25:02.936000 audit: BPF prog-id=86 op=UNLOAD Dec 12 17:25:02.936000 audit[2623]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613831313963666136366162646666653330663730623534666363 Dec 12 17:25:02.936000 audit: BPF prog-id=85 op=UNLOAD Dec 12 17:25:02.936000 audit[2623]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613831313963666136366162646666653330663730623534666363 Dec 12 17:25:02.936000 audit: BPF prog-id=87 op=LOAD Dec 12 17:25:02.936000 audit[2623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2593 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613831313963666136366162646666653330663730623534666363 Dec 12 17:25:02.938000 audit: BPF prog-id=88 op=LOAD Dec 12 17:25:02.939000 audit: BPF prog-id=89 op=LOAD Dec 12 17:25:02.939000 audit[2624]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=2584 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164643232623965663733306466663632396162356236636430636637 Dec 
12 17:25:02.939000 audit: BPF prog-id=89 op=UNLOAD Dec 12 17:25:02.939000 audit[2624]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2584 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164643232623965663733306466663632396162356236636430636637 Dec 12 17:25:02.939000 audit: BPF prog-id=90 op=LOAD Dec 12 17:25:02.939000 audit[2624]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=2584 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164643232623965663733306466663632396162356236636430636637 Dec 12 17:25:02.939000 audit: BPF prog-id=91 op=LOAD Dec 12 17:25:02.939000 audit[2624]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=2584 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.939000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164643232623965663733306466663632396162356236636430636637 Dec 12 17:25:02.939000 audit: BPF prog-id=91 op=UNLOAD Dec 12 17:25:02.939000 audit[2624]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2584 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164643232623965663733306466663632396162356236636430636637 Dec 12 17:25:02.939000 audit: BPF prog-id=90 op=UNLOAD Dec 12 17:25:02.939000 audit[2624]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2584 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164643232623965663733306466663632396162356236636430636637 Dec 12 17:25:02.939000 audit: BPF prog-id=92 op=LOAD Dec 12 17:25:02.939000 audit[2624]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=2584 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:25:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164643232623965663733306466663632396162356236636430636637 Dec 12 17:25:02.944000 audit: BPF prog-id=93 op=LOAD Dec 12 17:25:02.945000 audit: BPF prog-id=94 op=LOAD Dec 12 17:25:02.945000 audit[2659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2626 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361336537373734646361663537633037346365353834343332313039 Dec 12 17:25:02.945000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:25:02.945000 audit[2659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361336537373734646361663537633037346365353834343332313039 Dec 12 17:25:02.945000 audit: BPF prog-id=95 op=LOAD Dec 12 17:25:02.945000 audit[2659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2626 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361336537373734646361663537633037346365353834343332313039 Dec 12 17:25:02.945000 audit: BPF prog-id=96 op=LOAD Dec 12 17:25:02.945000 audit[2659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2626 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361336537373734646361663537633037346365353834343332313039 Dec 12 17:25:02.945000 audit: BPF prog-id=96 op=UNLOAD Dec 12 17:25:02.945000 audit[2659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361336537373734646361663537633037346365353834343332313039 Dec 12 17:25:02.945000 audit: BPF prog-id=95 op=UNLOAD Dec 12 17:25:02.945000 audit[2659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361336537373734646361663537633037346365353834343332313039 Dec 12 17:25:02.945000 audit: BPF prog-id=97 op=LOAD Dec 12 17:25:02.945000 audit[2659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2626 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:02.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361336537373734646361663537633037346365353834343332313039 Dec 12 17:25:02.965898 containerd[1693]: time="2025-12-12T17:25:02.965814610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-4-1de611bfb5,Uid:c42d3836f11ede3333e58f8f4fc0d3eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"92a8119cfa66abdffe30f70b54fcc7b634f84c9ba90d695b2ac1ebe3edef9754\"" Dec 12 17:25:02.973085 containerd[1693]: time="2025-12-12T17:25:02.973032047Z" level=info msg="CreateContainer within sandbox \"92a8119cfa66abdffe30f70b54fcc7b634f84c9ba90d695b2ac1ebe3edef9754\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:25:02.975666 containerd[1693]: time="2025-12-12T17:25:02.975606780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-4-1de611bfb5,Uid:21e36c28d02d1478eb8d1c9c46311be1,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"add22b9ef730dff629ab5b6cd0cf73d37f129dd67e1f478b91d0e4ef2b4a6451\"" Dec 12 17:25:02.980948 containerd[1693]: time="2025-12-12T17:25:02.980751486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-4-1de611bfb5,Uid:6d496e60638b37f8dc43e8d995ec6acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a3e7774dcaf57c074ce5844321092cc165bf0276d859f3b4c53a93cdc3ae2c8\"" Dec 12 17:25:02.982552 containerd[1693]: time="2025-12-12T17:25:02.982521175Z" level=info msg="CreateContainer within sandbox \"add22b9ef730dff629ab5b6cd0cf73d37f129dd67e1f478b91d0e4ef2b4a6451\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:25:02.987533 containerd[1693]: time="2025-12-12T17:25:02.987483480Z" level=info msg="CreateContainer within sandbox \"3a3e7774dcaf57c074ce5844321092cc165bf0276d859f3b4c53a93cdc3ae2c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:25:02.989840 containerd[1693]: time="2025-12-12T17:25:02.989792692Z" level=info msg="Container ddca1df084190377e7ff30732b3d2b1cd504b44cc5a51aa76d0d6eacae2935de: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:02.996748 containerd[1693]: time="2025-12-12T17:25:02.996694247Z" level=info msg="Container 175626a20981231ffc395ebe3f5f80a0d5f833cad1af824361f52216d6fc34c4: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:03.000712 containerd[1693]: time="2025-12-12T17:25:03.000655267Z" level=info msg="CreateContainer within sandbox \"92a8119cfa66abdffe30f70b54fcc7b634f84c9ba90d695b2ac1ebe3edef9754\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ddca1df084190377e7ff30732b3d2b1cd504b44cc5a51aa76d0d6eacae2935de\"" Dec 12 17:25:03.002819 containerd[1693]: time="2025-12-12T17:25:03.002032274Z" level=info msg="Container 1176344162e9eb1e15f16d838a8b3d6bdeea3b38adbf736eddbe664bb11c4afa: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:03.002819 containerd[1693]: 
time="2025-12-12T17:25:03.002044474Z" level=info msg="StartContainer for \"ddca1df084190377e7ff30732b3d2b1cd504b44cc5a51aa76d0d6eacae2935de\"" Dec 12 17:25:03.003863 containerd[1693]: time="2025-12-12T17:25:03.003816923Z" level=info msg="connecting to shim ddca1df084190377e7ff30732b3d2b1cd504b44cc5a51aa76d0d6eacae2935de" address="unix:///run/containerd/s/9f0d08f4d12e414605641589956e51bf7b5e3dfb78861689951c22a0c9e89e50" protocol=ttrpc version=3 Dec 12 17:25:03.005893 containerd[1693]: time="2025-12-12T17:25:03.005828613Z" level=info msg="CreateContainer within sandbox \"add22b9ef730dff629ab5b6cd0cf73d37f129dd67e1f478b91d0e4ef2b4a6451\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"175626a20981231ffc395ebe3f5f80a0d5f833cad1af824361f52216d6fc34c4\"" Dec 12 17:25:03.006459 containerd[1693]: time="2025-12-12T17:25:03.006426336Z" level=info msg="StartContainer for \"175626a20981231ffc395ebe3f5f80a0d5f833cad1af824361f52216d6fc34c4\"" Dec 12 17:25:03.007563 containerd[1693]: time="2025-12-12T17:25:03.007527582Z" level=info msg="connecting to shim 175626a20981231ffc395ebe3f5f80a0d5f833cad1af824361f52216d6fc34c4" address="unix:///run/containerd/s/5f730e6d44ffac334cc43be89ac87a0e5c67bc4afbac368a3cf92a49a481528b" protocol=ttrpc version=3 Dec 12 17:25:03.013635 kubelet[2541]: E1212 17:25:03.013513 2541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-4-1de611bfb5?timeout=10s\": dial tcp 10.0.7.100:6443: connect: connection refused" interval="800ms" Dec 12 17:25:03.016583 containerd[1693]: time="2025-12-12T17:25:03.016519628Z" level=info msg="CreateContainer within sandbox \"3a3e7774dcaf57c074ce5844321092cc165bf0276d859f3b4c53a93cdc3ae2c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1176344162e9eb1e15f16d838a8b3d6bdeea3b38adbf736eddbe664bb11c4afa\"" Dec 12 17:25:03.018073 
containerd[1693]: time="2025-12-12T17:25:03.017901915Z" level=info msg="StartContainer for \"1176344162e9eb1e15f16d838a8b3d6bdeea3b38adbf736eddbe664bb11c4afa\"" Dec 12 17:25:03.019755 containerd[1693]: time="2025-12-12T17:25:03.019714884Z" level=info msg="connecting to shim 1176344162e9eb1e15f16d838a8b3d6bdeea3b38adbf736eddbe664bb11c4afa" address="unix:///run/containerd/s/228503e13fe7a9ea3773230c72b55e2c8bda747a6e2bc32dc1cd95957f1fd367" protocol=ttrpc version=3 Dec 12 17:25:03.025068 systemd[1]: Started cri-containerd-ddca1df084190377e7ff30732b3d2b1cd504b44cc5a51aa76d0d6eacae2935de.scope - libcontainer container ddca1df084190377e7ff30732b3d2b1cd504b44cc5a51aa76d0d6eacae2935de. Dec 12 17:25:03.028958 systemd[1]: Started cri-containerd-175626a20981231ffc395ebe3f5f80a0d5f833cad1af824361f52216d6fc34c4.scope - libcontainer container 175626a20981231ffc395ebe3f5f80a0d5f833cad1af824361f52216d6fc34c4. Dec 12 17:25:03.048319 systemd[1]: Started cri-containerd-1176344162e9eb1e15f16d838a8b3d6bdeea3b38adbf736eddbe664bb11c4afa.scope - libcontainer container 1176344162e9eb1e15f16d838a8b3d6bdeea3b38adbf736eddbe664bb11c4afa. 
Dec 12 17:25:03.049000 audit: BPF prog-id=98 op=LOAD Dec 12 17:25:03.050000 audit: BPF prog-id=99 op=LOAD Dec 12 17:25:03.050000 audit[2713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2593 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464636131646630383431393033373765376666333037333262336432 Dec 12 17:25:03.050000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:25:03.050000 audit[2713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464636131646630383431393033373765376666333037333262336432 Dec 12 17:25:03.051000 audit: BPF prog-id=100 op=LOAD Dec 12 17:25:03.051000 audit[2713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2593 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.051000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464636131646630383431393033373765376666333037333262336432 Dec 12 17:25:03.051000 audit: BPF prog-id=101 op=LOAD Dec 12 17:25:03.051000 audit[2713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2593 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464636131646630383431393033373765376666333037333262336432 Dec 12 17:25:03.051000 audit: BPF prog-id=101 op=UNLOAD Dec 12 17:25:03.051000 audit[2713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464636131646630383431393033373765376666333037333262336432 Dec 12 17:25:03.051000 audit: BPF prog-id=100 op=UNLOAD Dec 12 17:25:03.051000 audit[2713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:03.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464636131646630383431393033373765376666333037333262336432 Dec 12 17:25:03.051000 audit: BPF prog-id=102 op=LOAD Dec 12 17:25:03.051000 audit[2713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2593 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464636131646630383431393033373765376666333037333262336432 Dec 12 17:25:03.051000 audit: BPF prog-id=103 op=LOAD Dec 12 17:25:03.052000 audit: BPF prog-id=104 op=LOAD Dec 12 17:25:03.052000 audit[2719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2584 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353632366132303938313233316666633339356562653366356638 Dec 12 17:25:03.052000 audit: BPF prog-id=104 op=UNLOAD Dec 12 17:25:03.052000 audit[2719]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2584 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353632366132303938313233316666633339356562653366356638 Dec 12 17:25:03.052000 audit: BPF prog-id=105 op=LOAD Dec 12 17:25:03.052000 audit[2719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2584 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353632366132303938313233316666633339356562653366356638 Dec 12 17:25:03.052000 audit: BPF prog-id=106 op=LOAD Dec 12 17:25:03.052000 audit[2719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2584 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353632366132303938313233316666633339356562653366356638 Dec 12 17:25:03.052000 audit: BPF prog-id=106 op=UNLOAD Dec 12 17:25:03.052000 audit[2719]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2584 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353632366132303938313233316666633339356562653366356638 Dec 12 17:25:03.052000 audit: BPF prog-id=105 op=UNLOAD Dec 12 17:25:03.052000 audit[2719]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2584 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353632366132303938313233316666633339356562653366356638 Dec 12 17:25:03.052000 audit: BPF prog-id=107 op=LOAD Dec 12 17:25:03.052000 audit[2719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2584 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137353632366132303938313233316666633339356562653366356638 Dec 12 17:25:03.061000 audit: BPF prog-id=108 op=LOAD Dec 12 17:25:03.062000 audit: BPF prog-id=109 op=LOAD Dec 12 17:25:03.062000 audit[2736]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2626 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373633343431363265396562316531356631366438333861386233 Dec 12 17:25:03.062000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:25:03.062000 audit[2736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373633343431363265396562316531356631366438333861386233 Dec 12 17:25:03.062000 audit: BPF prog-id=110 op=LOAD Dec 12 17:25:03.062000 audit[2736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2626 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373633343431363265396562316531356631366438333861386233 Dec 12 17:25:03.062000 audit: BPF prog-id=111 op=LOAD Dec 12 17:25:03.062000 audit[2736]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2626 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373633343431363265396562316531356631366438333861386233 Dec 12 17:25:03.062000 audit: BPF prog-id=111 op=UNLOAD Dec 12 17:25:03.062000 audit[2736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373633343431363265396562316531356631366438333861386233 Dec 12 17:25:03.062000 audit: BPF prog-id=110 op=UNLOAD Dec 12 17:25:03.062000 audit[2736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373633343431363265396562316531356631366438333861386233 Dec 12 17:25:03.062000 audit: BPF prog-id=112 op=LOAD 
Dec 12 17:25:03.062000 audit[2736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2626 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:03.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373633343431363265396562316531356631366438333861386233
Dec 12 17:25:03.090149 containerd[1693]: time="2025-12-12T17:25:03.090096321Z" level=info msg="StartContainer for \"ddca1df084190377e7ff30732b3d2b1cd504b44cc5a51aa76d0d6eacae2935de\" returns successfully"
Dec 12 17:25:03.090277 containerd[1693]: time="2025-12-12T17:25:03.090240202Z" level=info msg="StartContainer for \"175626a20981231ffc395ebe3f5f80a0d5f833cad1af824361f52216d6fc34c4\" returns successfully"
Dec 12 17:25:03.101270 containerd[1693]: time="2025-12-12T17:25:03.101233018Z" level=info msg="StartContainer for \"1176344162e9eb1e15f16d838a8b3d6bdeea3b38adbf736eddbe664bb11c4afa\" returns successfully"
Dec 12 17:25:03.174081 kubelet[2541]: I1212 17:25:03.173977 2541 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:03.178566 kubelet[2541]: E1212 17:25:03.178346 2541 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.100:6443/api/v1/nodes\": dial tcp 10.0.7.100:6443: connect: connection refused" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:03.438594 kubelet[2541]: E1212 17:25:03.438488 2541 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:03.442441 kubelet[2541]: E1212 17:25:03.442414 2541 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:03.446588 kubelet[2541]: E1212 17:25:03.446559 2541 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:03.980228 kubelet[2541]: I1212 17:25:03.980175 2541 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.350472 kubelet[2541]: E1212 17:25:04.350431 2541 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.401285 kubelet[2541]: I1212 17:25:04.401249 2541 apiserver.go:52] "Watching apiserver"
Dec 12 17:25:04.409565 kubelet[2541]: I1212 17:25:04.409530 2541 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 17:25:04.448188 kubelet[2541]: E1212 17:25:04.448143 2541 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.448320 kubelet[2541]: E1212 17:25:04.448210 2541 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-4-1de611bfb5\" not found" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.462091 kubelet[2541]: I1212 17:25:04.462049 2541 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.510092 kubelet[2541]: I1212 17:25:04.510047 2541 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.519838 kubelet[2541]: E1212 17:25:04.519215 2541 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-4-1de611bfb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.519838 kubelet[2541]: I1212 17:25:04.519251 2541 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.521515 kubelet[2541]: E1212 17:25:04.521484 2541 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.521515 kubelet[2541]: I1212 17:25:04.521516 2541 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:04.526102 kubelet[2541]: E1212 17:25:04.526069 2541 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-4-1de611bfb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:06.415170 systemd[1]: Reload requested from client PID 2823 ('systemctl') (unit session-7.scope)...
Dec 12 17:25:06.415470 systemd[1]: Reloading...
Dec 12 17:25:06.501875 zram_generator::config[2872]: No configuration found.
Dec 12 17:25:06.686770 systemd[1]: Reloading finished in 270 ms.
Dec 12 17:25:06.722252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:25:06.734123 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 17:25:06.734398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:25:06.737092 kernel: kauditd_printk_skb: 203 callbacks suppressed
Dec 12 17:25:06.737148 kernel: audit: type=1131 audit(1765560306.733:399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:25:06.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:25:06.734470 systemd[1]: kubelet.service: Consumed 1.153s CPU time, 130.4M memory peak.
Dec 12 17:25:06.737102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:25:06.736000 audit: BPF prog-id=113 op=LOAD
Dec 12 17:25:06.738556 kernel: audit: type=1334 audit(1765560306.736:400): prog-id=113 op=LOAD
Dec 12 17:25:06.736000 audit: BPF prog-id=70 op=UNLOAD
Dec 12 17:25:06.739416 kernel: audit: type=1334 audit(1765560306.736:401): prog-id=70 op=UNLOAD
Dec 12 17:25:06.737000 audit: BPF prog-id=114 op=LOAD
Dec 12 17:25:06.740249 kernel: audit: type=1334 audit(1765560306.737:402): prog-id=114 op=LOAD
Dec 12 17:25:06.737000 audit: BPF prog-id=115 op=LOAD
Dec 12 17:25:06.740997 kernel: audit: type=1334 audit(1765560306.737:403): prog-id=115 op=LOAD
Dec 12 17:25:06.737000 audit: BPF prog-id=71 op=UNLOAD
Dec 12 17:25:06.737000 audit: BPF prog-id=72 op=UNLOAD
Dec 12 17:25:06.742509 kernel: audit: type=1334 audit(1765560306.737:404): prog-id=71 op=UNLOAD
Dec 12 17:25:06.742553 kernel: audit: type=1334 audit(1765560306.737:405): prog-id=72 op=UNLOAD
Dec 12 17:25:06.738000 audit: BPF prog-id=116 op=LOAD
Dec 12 17:25:06.743313 kernel: audit: type=1334 audit(1765560306.738:406): prog-id=116 op=LOAD
Dec 12 17:25:06.738000 audit: BPF prog-id=66 op=UNLOAD
Dec 12 17:25:06.744041 kernel: audit: type=1334 audit(1765560306.738:407): prog-id=66 op=UNLOAD
Dec 12 17:25:06.738000 audit: BPF prog-id=117 op=LOAD
Dec 12 17:25:06.739000 audit: BPF prog-id=118 op=LOAD
Dec 12 17:25:06.739000 audit: BPF prog-id=67 op=UNLOAD
Dec 12 17:25:06.739000 audit: BPF prog-id=68 op=UNLOAD
Dec 12 17:25:06.744841 kernel: audit: type=1334 audit(1765560306.738:408): prog-id=117 op=LOAD
Dec 12 17:25:06.740000 audit: BPF prog-id=119 op=LOAD
Dec 12 17:25:06.746000 audit: BPF prog-id=80 op=UNLOAD
Dec 12 17:25:06.746000 audit: BPF prog-id=120 op=LOAD
Dec 12 17:25:06.746000 audit: BPF prog-id=121 op=LOAD
Dec 12 17:25:06.746000 audit: BPF prog-id=81 op=UNLOAD
Dec 12 17:25:06.746000 audit: BPF prog-id=82 op=UNLOAD
Dec 12 17:25:06.747000 audit: BPF prog-id=122 op=LOAD
Dec 12 17:25:06.748000 audit: BPF prog-id=77 op=UNLOAD
Dec 12 17:25:06.748000 audit: BPF prog-id=123 op=LOAD
Dec 12 17:25:06.748000 audit: BPF prog-id=124 op=LOAD
Dec 12 17:25:06.748000 audit: BPF prog-id=78 op=UNLOAD
Dec 12 17:25:06.748000 audit: BPF prog-id=79 op=UNLOAD
Dec 12 17:25:06.748000 audit: BPF prog-id=125 op=LOAD
Dec 12 17:25:06.748000 audit: BPF prog-id=65 op=UNLOAD
Dec 12 17:25:06.749000 audit: BPF prog-id=126 op=LOAD
Dec 12 17:25:06.749000 audit: BPF prog-id=76 op=UNLOAD
Dec 12 17:25:06.750000 audit: BPF prog-id=127 op=LOAD
Dec 12 17:25:06.750000 audit: BPF prog-id=128 op=LOAD
Dec 12 17:25:06.750000 audit: BPF prog-id=63 op=UNLOAD
Dec 12 17:25:06.750000 audit: BPF prog-id=64 op=UNLOAD
Dec 12 17:25:06.751000 audit: BPF prog-id=129 op=LOAD
Dec 12 17:25:06.751000 audit: BPF prog-id=73 op=UNLOAD
Dec 12 17:25:06.751000 audit: BPF prog-id=130 op=LOAD
Dec 12 17:25:06.751000 audit: BPF prog-id=131 op=LOAD
Dec 12 17:25:06.751000 audit: BPF prog-id=74 op=UNLOAD
Dec 12 17:25:06.751000 audit: BPF prog-id=75 op=UNLOAD
Dec 12 17:25:06.751000 audit: BPF prog-id=132 op=LOAD
Dec 12 17:25:06.751000 audit: BPF prog-id=69 op=UNLOAD
Dec 12 17:25:06.872383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:25:06.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:25:06.887241 (kubelet)[2914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 17:25:06.917715 kubelet[2914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:25:06.917715 kubelet[2914]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 17:25:06.917715 kubelet[2914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:25:06.918061 kubelet[2914]: I1212 17:25:06.917745 2914 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 17:25:06.923336 kubelet[2914]: I1212 17:25:06.923286 2914 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 12 17:25:06.923336 kubelet[2914]: I1212 17:25:06.923319 2914 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 17:25:06.923579 kubelet[2914]: I1212 17:25:06.923561 2914 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 17:25:06.925827 kubelet[2914]: I1212 17:25:06.925803 2914 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 12 17:25:06.928071 kubelet[2914]: I1212 17:25:06.928018 2914 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 17:25:06.931554 kubelet[2914]: I1212 17:25:06.931530 2914 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 17:25:06.934201 kubelet[2914]: I1212 17:25:06.934147 2914 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 17:25:06.934434 kubelet[2914]: I1212 17:25:06.934405 2914 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 17:25:06.934586 kubelet[2914]: I1212 17:25:06.934433 2914 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-4-1de611bfb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 17:25:06.934657 kubelet[2914]: I1212 17:25:06.934597 2914 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 17:25:06.934657 kubelet[2914]: I1212 17:25:06.934606 2914 container_manager_linux.go:303] "Creating device plugin manager"
Dec 12 17:25:06.934657 kubelet[2914]: I1212 17:25:06.934646 2914 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:25:06.934857 kubelet[2914]: I1212 17:25:06.934823 2914 kubelet.go:480] "Attempting to sync node with API server"
Dec 12 17:25:06.934881 kubelet[2914]: I1212 17:25:06.934857 2914 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:25:06.934915 kubelet[2914]: I1212 17:25:06.934900 2914 kubelet.go:386] "Adding apiserver pod source"
Dec 12 17:25:06.934941 kubelet[2914]: I1212 17:25:06.934918 2914 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:25:06.936420 kubelet[2914]: I1212 17:25:06.936318 2914 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Dec 12 17:25:06.938997 kubelet[2914]: I1212 17:25:06.937638 2914 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:25:06.946761 kubelet[2914]: I1212 17:25:06.946736 2914 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 17:25:06.946823 kubelet[2914]: I1212 17:25:06.946781 2914 server.go:1289] "Started kubelet"
Dec 12 17:25:06.949764 kubelet[2914]: I1212 17:25:06.949737 2914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 17:25:06.954134 kubelet[2914]: I1212 17:25:06.954108 2914 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 17:25:06.954366 kubelet[2914]: I1212 17:25:06.954353 2914 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 17:25:06.954547 kubelet[2914]: E1212 17:25:06.954439 2914 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 17:25:06.954651 kubelet[2914]: I1212 17:25:06.954509 2914 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 17:25:06.956414 kubelet[2914]: I1212 17:25:06.955759 2914 server.go:317] "Adding debug handlers to kubelet server"
Dec 12 17:25:06.956414 kubelet[2914]: I1212 17:25:06.956368 2914 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 17:25:06.956737 kubelet[2914]: I1212 17:25:06.954549 2914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 17:25:06.957190 kubelet[2914]: I1212 17:25:06.957159 2914 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 17:25:06.958791 kubelet[2914]: I1212 17:25:06.958754 2914 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 17:25:06.959251 kubelet[2914]: I1212 17:25:06.959224 2914 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 17:25:06.959940 kubelet[2914]: I1212 17:25:06.959920 2914 factory.go:223] Registration of the containerd container factory successfully
Dec 12 17:25:06.959940 kubelet[2914]: I1212 17:25:06.959935 2914 factory.go:223] Registration of the systemd container factory successfully
Dec 12 17:25:06.972185 kubelet[2914]: I1212 17:25:06.972143 2914 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 12 17:25:06.973624 kubelet[2914]: I1212 17:25:06.973594 2914 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 12 17:25:06.973624 kubelet[2914]: I1212 17:25:06.973621 2914 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 12 17:25:06.973732 kubelet[2914]: I1212 17:25:06.973641 2914 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 17:25:06.973732 kubelet[2914]: I1212 17:25:06.973647 2914 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 12 17:25:06.973732 kubelet[2914]: E1212 17:25:06.973688 2914 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 17:25:07.193962 kubelet[2914]: E1212 17:25:07.193267 2914 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 12 17:25:07.218239 kubelet[2914]: I1212 17:25:07.218211 2914 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 17:25:07.218239 kubelet[2914]: I1212 17:25:07.218229 2914 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 17:25:07.218239 kubelet[2914]: I1212 17:25:07.218249 2914 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:25:07.218404 kubelet[2914]: I1212 17:25:07.218373 2914 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 12 17:25:07.218404 kubelet[2914]: I1212 17:25:07.218383 2914 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 12 17:25:07.218404 kubelet[2914]: I1212 17:25:07.218400 2914 policy_none.go:49] "None policy: Start"
Dec 12 17:25:07.218469 kubelet[2914]: I1212 17:25:07.218409 2914 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 17:25:07.218469 kubelet[2914]: I1212 17:25:07.218416 2914 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 17:25:07.218525 kubelet[2914]: I1212 17:25:07.218504 2914 state_mem.go:75] "Updated machine memory state"
Dec 12 17:25:07.223262 kubelet[2914]: E1212 17:25:07.223215 2914 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 17:25:07.223651 kubelet[2914]: I1212 17:25:07.223637 2914 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 17:25:07.223698 kubelet[2914]: I1212 17:25:07.223651 2914 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 17:25:07.224039 kubelet[2914]: I1212 17:25:07.224003 2914 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 17:25:07.225012 kubelet[2914]: E1212 17:25:07.224901 2914 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 17:25:07.328322 kubelet[2914]: I1212 17:25:07.328287 2914 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.337344 kubelet[2914]: I1212 17:25:07.337024 2914 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.337344 kubelet[2914]: I1212 17:25:07.337115 2914 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.394385 kubelet[2914]: I1212 17:25:07.394334 2914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.394695 kubelet[2914]: I1212 17:25:07.394338 2914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.394695 kubelet[2914]: I1212 17:25:07.394348 2914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.491785 kubelet[2914]: I1212 17:25:07.491595 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.491785 kubelet[2914]: I1212 17:25:07.491700 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.491785 kubelet[2914]: I1212 17:25:07.491737 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.491785 kubelet[2914]: I1212 17:25:07.491758 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.491785 kubelet[2914]: I1212 17:25:07.491777 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c42d3836f11ede3333e58f8f4fc0d3eb-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-4-1de611bfb5\" (UID: \"c42d3836f11ede3333e58f8f4fc0d3eb\") " pod="kube-system/kube-scheduler-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.492202 kubelet[2914]: I1212 17:25:07.491792 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21e36c28d02d1478eb8d1c9c46311be1-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-4-1de611bfb5\" (UID: \"21e36c28d02d1478eb8d1c9c46311be1\") " pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.492202 kubelet[2914]: I1212 17:25:07.491809 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d496e60638b37f8dc43e8d995ec6acf-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" (UID: \"6d496e60638b37f8dc43e8d995ec6acf\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.492202 kubelet[2914]: I1212 17:25:07.491825 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21e36c28d02d1478eb8d1c9c46311be1-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-4-1de611bfb5\" (UID: \"21e36c28d02d1478eb8d1c9c46311be1\") " pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.492202 kubelet[2914]: I1212 17:25:07.491859 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21e36c28d02d1478eb8d1c9c46311be1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-4-1de611bfb5\" (UID: \"21e36c28d02d1478eb8d1c9c46311be1\") " pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:07.935404 kubelet[2914]: I1212 17:25:07.935329 2914 apiserver.go:52] "Watching apiserver"
Dec 12 17:25:07.955312 kubelet[2914]: I1212 17:25:07.955240 2914 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 17:25:08.205649 kubelet[2914]: I1212 17:25:08.205536 2914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:08.205649 kubelet[2914]: I1212 17:25:08.205582 2914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:08.205825 kubelet[2914]: I1212 17:25:08.205806 2914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:08.212583 kubelet[2914]: E1212 17:25:08.211995 2914 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-4-1de611bfb5\" already exists" pod="kube-system/kube-scheduler-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:08.212583 kubelet[2914]: E1212 17:25:08.212045 2914 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-4-1de611bfb5\" already exists" pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:08.213429 kubelet[2914]: E1212 17:25:08.213347 2914 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-4-1de611bfb5\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5"
Dec 12 17:25:08.225504 kubelet[2914]: I1212 17:25:08.225315 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515-1-0-4-1de611bfb5" podStartSLOduration=1.2253008539999999 podStartE2EDuration="1.225300854s" podCreationTimestamp="2025-12-12 17:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:08.225199493 +0000 UTC m=+1.334052659" watchObservedRunningTime="2025-12-12 17:25:08.225300854 +0000 UTC m=+1.334154020"
Dec 12 17:25:08.234612 kubelet[2914]: I1212 17:25:08.234359 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515-1-0-4-1de611bfb5" podStartSLOduration=1.23434198 podStartE2EDuration="1.23434198s" podCreationTimestamp="2025-12-12 17:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:08.234248259 +0000 UTC m=+1.343101425" watchObservedRunningTime="2025-12-12 17:25:08.23434198 +0000 UTC m=+1.343195146"
Dec 12 17:25:08.242595 kubelet[2914]: I1212 17:25:08.242527 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515-1-0-4-1de611bfb5" podStartSLOduration=1.242512141 podStartE2EDuration="1.242512141s" podCreationTimestamp="2025-12-12 17:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:08.242077939 +0000 UTC m=+1.350931145" watchObservedRunningTime="2025-12-12 17:25:08.242512141 +0000 UTC m=+1.351365307"
Dec 12 17:25:10.637442 kubelet[2914]: I1212 17:25:10.637401 2914 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 12 17:25:10.637803 containerd[1693]: time="2025-12-12T17:25:10.637761592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 12 17:25:10.638379 kubelet[2914]: I1212 17:25:10.637976 2914 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 12 17:25:11.502304 systemd[1]: Created slice kubepods-besteffort-pod099af375_f215_460e_a699_d2cfb39d0632.slice - libcontainer container kubepods-besteffort-pod099af375_f215_460e_a699_d2cfb39d0632.slice.
Dec 12 17:25:11.617050 kubelet[2914]: I1212 17:25:11.617003 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/099af375-f215-460e-a699-d2cfb39d0632-xtables-lock\") pod \"kube-proxy-8k27k\" (UID: \"099af375-f215-460e-a699-d2cfb39d0632\") " pod="kube-system/kube-proxy-8k27k"
Dec 12 17:25:11.617247 kubelet[2914]: I1212 17:25:11.617229 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/099af375-f215-460e-a699-d2cfb39d0632-kube-proxy\") pod \"kube-proxy-8k27k\" (UID: \"099af375-f215-460e-a699-d2cfb39d0632\") " pod="kube-system/kube-proxy-8k27k"
Dec 12 17:25:11.617324 kubelet[2914]: I1212 17:25:11.617312 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/099af375-f215-460e-a699-d2cfb39d0632-lib-modules\") pod \"kube-proxy-8k27k\" (UID: \"099af375-f215-460e-a699-d2cfb39d0632\") " pod="kube-system/kube-proxy-8k27k"
Dec 12 17:25:11.617412 kubelet[2914]: I1212 17:25:11.617395 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdt2z\" (UniqueName: \"kubernetes.io/projected/099af375-f215-460e-a699-d2cfb39d0632-kube-api-access-xdt2z\") pod \"kube-proxy-8k27k\" (UID: \"099af375-f215-460e-a699-d2cfb39d0632\") " pod="kube-system/kube-proxy-8k27k"
Dec 12 17:25:11.813846 containerd[1693]: time="2025-12-12T17:25:11.813709887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8k27k,Uid:099af375-f215-460e-a699-d2cfb39d0632,Namespace:kube-system,Attempt:0,}"
Dec 12 17:25:11.846161 containerd[1693]: time="2025-12-12T17:25:11.846114212Z" level=info msg="connecting to shim 7ee6af1be25694a1c316c60f6fde4a05f343714f025191809337eec37c1ae51e" address="unix:///run/containerd/s/b8e6d99b01ff22cec4e6cc983442e98bad68e19d40191244ddb34e692652de20" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:25:11.877131 systemd[1]: Started cri-containerd-7ee6af1be25694a1c316c60f6fde4a05f343714f025191809337eec37c1ae51e.scope - libcontainer container 7ee6af1be25694a1c316c60f6fde4a05f343714f025191809337eec37c1ae51e.
Dec 12 17:25:11.881901 systemd[1]: Created slice kubepods-besteffort-pod8c14ec5f_ea23_4fd4_8fb8_351d537305ba.slice - libcontainer container kubepods-besteffort-pod8c14ec5f_ea23_4fd4_8fb8_351d537305ba.slice.
Dec 12 17:25:11.888000 audit: BPF prog-id=133 op=LOAD
Dec 12 17:25:11.891128 kernel: kauditd_printk_skb: 32 callbacks suppressed
Dec 12 17:25:11.891191 kernel: audit: type=1334 audit(1765560311.888:441): prog-id=133 op=LOAD
Dec 12 17:25:11.890000 audit: BPF prog-id=134 op=LOAD
Dec 12 17:25:11.892041 kernel: audit: type=1334 audit(1765560311.890:442): prog-id=134 op=LOAD
Dec 12 17:25:11.890000 audit[2995]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.895048 kernel: audit: type=1300 audit(1765560311.890:442): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.899351 kernel: audit: type=1327 audit(1765560311.890:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.899445 kernel: audit: type=1334 audit(1765560311.891:443): prog-id=134 op=UNLOAD
Dec 12 17:25:11.899464 kernel: audit: type=1300 audit(1765560311.891:443): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.891000 audit: BPF prog-id=134 op=UNLOAD
Dec 12 17:25:11.891000 audit[2995]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.902431 kernel: audit: type=1327 audit(1765560311.891:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.894000 audit: BPF prog-id=135 op=LOAD
Dec 12 17:25:11.905917 kernel: audit: type=1334 audit(1765560311.894:444): prog-id=135 op=LOAD
Dec 12 17:25:11.894000 audit[2995]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.909026 kernel: audit: type=1300 audit(1765560311.894:444): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.912121 kernel: audit: type=1327 audit(1765560311.894:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.894000 audit: BPF prog-id=136 op=LOAD
Dec 12 17:25:11.894000 audit[2995]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.894000 audit: BPF prog-id=136 op=UNLOAD
Dec 12 17:25:11.894000 audit[2995]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.894000 audit: BPF prog-id=135 op=UNLOAD
Dec 12 17:25:11.894000 audit[2995]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.894000 audit: BPF prog-id=137 op=LOAD
Dec 12 17:25:11.894000 audit[2995]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2984 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:25:11.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765653661663162653235363934613163333136633630663666646534
Dec 12 17:25:11.920549 kubelet[2914]: I1212 
17:25:11.920495 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dg4\" (UniqueName: \"kubernetes.io/projected/8c14ec5f-ea23-4fd4-8fb8-351d537305ba-kube-api-access-r2dg4\") pod \"tigera-operator-7dcd859c48-sb8b5\" (UID: \"8c14ec5f-ea23-4fd4-8fb8-351d537305ba\") " pod="tigera-operator/tigera-operator-7dcd859c48-sb8b5" Dec 12 17:25:11.920991 kubelet[2914]: I1212 17:25:11.920565 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c14ec5f-ea23-4fd4-8fb8-351d537305ba-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sb8b5\" (UID: \"8c14ec5f-ea23-4fd4-8fb8-351d537305ba\") " pod="tigera-operator/tigera-operator-7dcd859c48-sb8b5" Dec 12 17:25:11.925036 containerd[1693]: time="2025-12-12T17:25:11.924930052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8k27k,Uid:099af375-f215-460e-a699-d2cfb39d0632,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ee6af1be25694a1c316c60f6fde4a05f343714f025191809337eec37c1ae51e\"" Dec 12 17:25:11.931227 containerd[1693]: time="2025-12-12T17:25:11.931189524Z" level=info msg="CreateContainer within sandbox \"7ee6af1be25694a1c316c60f6fde4a05f343714f025191809337eec37c1ae51e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:25:11.943960 containerd[1693]: time="2025-12-12T17:25:11.943572027Z" level=info msg="Container 73e5e521cb37c74ef93ef8f5ce08444c15a409f4640adc59cd5d8893c899a8d7: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:11.951464 containerd[1693]: time="2025-12-12T17:25:11.951398147Z" level=info msg="CreateContainer within sandbox \"7ee6af1be25694a1c316c60f6fde4a05f343714f025191809337eec37c1ae51e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73e5e521cb37c74ef93ef8f5ce08444c15a409f4640adc59cd5d8893c899a8d7\"" Dec 12 17:25:11.952105 containerd[1693]: 
time="2025-12-12T17:25:11.952080390Z" level=info msg="StartContainer for \"73e5e521cb37c74ef93ef8f5ce08444c15a409f4640adc59cd5d8893c899a8d7\"" Dec 12 17:25:11.954260 containerd[1693]: time="2025-12-12T17:25:11.953705118Z" level=info msg="connecting to shim 73e5e521cb37c74ef93ef8f5ce08444c15a409f4640adc59cd5d8893c899a8d7" address="unix:///run/containerd/s/b8e6d99b01ff22cec4e6cc983442e98bad68e19d40191244ddb34e692652de20" protocol=ttrpc version=3 Dec 12 17:25:11.977117 systemd[1]: Started cri-containerd-73e5e521cb37c74ef93ef8f5ce08444c15a409f4640adc59cd5d8893c899a8d7.scope - libcontainer container 73e5e521cb37c74ef93ef8f5ce08444c15a409f4640adc59cd5d8893c899a8d7. Dec 12 17:25:12.035000 audit: BPF prog-id=138 op=LOAD Dec 12 17:25:12.035000 audit[3020]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2984 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653565353231636233376337346566393365663866356365303834 Dec 12 17:25:12.035000 audit: BPF prog-id=139 op=LOAD Dec 12 17:25:12.035000 audit[3020]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2984 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.035000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653565353231636233376337346566393365663866356365303834 Dec 12 17:25:12.035000 audit: BPF prog-id=139 op=UNLOAD Dec 12 17:25:12.035000 audit[3020]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2984 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653565353231636233376337346566393365663866356365303834 Dec 12 17:25:12.035000 audit: BPF prog-id=138 op=UNLOAD Dec 12 17:25:12.035000 audit[3020]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2984 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653565353231636233376337346566393365663866356365303834 Dec 12 17:25:12.035000 audit: BPF prog-id=140 op=LOAD Dec 12 17:25:12.035000 audit[3020]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2984 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:12.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653565353231636233376337346566393365663866356365303834 Dec 12 17:25:12.056923 containerd[1693]: time="2025-12-12T17:25:12.056873243Z" level=info msg="StartContainer for \"73e5e521cb37c74ef93ef8f5ce08444c15a409f4640adc59cd5d8893c899a8d7\" returns successfully" Dec 12 17:25:12.186086 containerd[1693]: time="2025-12-12T17:25:12.186034179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sb8b5,Uid:8c14ec5f-ea23-4fd4-8fb8-351d537305ba,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:25:12.217077 containerd[1693]: time="2025-12-12T17:25:12.217027576Z" level=info msg="connecting to shim 03be7902cdf298c5229227313d87960b5593c7a8ffc6e5f179dd56212b26f432" address="unix:///run/containerd/s/fa626b4f867d7a809365e5c49338f2c4b531d56e3a4555d50b5286c3e914c185" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:12.227384 kubelet[2914]: I1212 17:25:12.227330 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8k27k" podStartSLOduration=1.2273145890000001 podStartE2EDuration="1.227314589s" podCreationTimestamp="2025-12-12 17:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:12.227262308 +0000 UTC m=+5.336115474" watchObservedRunningTime="2025-12-12 17:25:12.227314589 +0000 UTC m=+5.336167715" Dec 12 17:25:12.238000 audit[3121]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.238000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb579970 a2=0 a3=1 items=0 ppid=3033 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:25:12.238000 audit[3122]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.238000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe31fca20 a2=0 a3=1 items=0 ppid=3033 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:25:12.241000 audit[3126]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.241000 audit[3126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecf51990 a2=0 a3=1 items=0 ppid=3033 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:25:12.244000 audit[3125]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.244000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2e60a90 a2=0 a3=1 items=0 ppid=3033 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.244000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:25:12.244000 audit[3128]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.244000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca3f6430 a2=0 a3=1 items=0 ppid=3033 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:25:12.247000 audit[3129]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.247000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6ca5f00 a2=0 a3=1 items=0 ppid=3033 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:25:12.251337 systemd[1]: Started cri-containerd-03be7902cdf298c5229227313d87960b5593c7a8ffc6e5f179dd56212b26f432.scope - libcontainer container 03be7902cdf298c5229227313d87960b5593c7a8ffc6e5f179dd56212b26f432. 
Dec 12 17:25:12.259000 audit: BPF prog-id=141 op=LOAD Dec 12 17:25:12.259000 audit: BPF prog-id=142 op=LOAD Dec 12 17:25:12.259000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3087 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626537393032636466323938633532323932323733313364383739 Dec 12 17:25:12.259000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:25:12.259000 audit[3104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626537393032636466323938633532323932323733313364383739 Dec 12 17:25:12.259000 audit: BPF prog-id=143 op=LOAD Dec 12 17:25:12.259000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3087 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.259000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626537393032636466323938633532323932323733313364383739 Dec 12 17:25:12.259000 audit: BPF prog-id=144 op=LOAD Dec 12 17:25:12.259000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3087 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626537393032636466323938633532323932323733313364383739 Dec 12 17:25:12.259000 audit: BPF prog-id=144 op=UNLOAD Dec 12 17:25:12.259000 audit[3104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626537393032636466323938633532323932323733313364383739 Dec 12 17:25:12.259000 audit: BPF prog-id=143 op=UNLOAD Dec 12 17:25:12.259000 audit[3104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:12.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626537393032636466323938633532323932323733313364383739 Dec 12 17:25:12.259000 audit: BPF prog-id=145 op=LOAD Dec 12 17:25:12.259000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3087 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033626537393032636466323938633532323932323733313364383739 Dec 12 17:25:12.288108 containerd[1693]: time="2025-12-12T17:25:12.288061977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sb8b5,Uid:8c14ec5f-ea23-4fd4-8fb8-351d537305ba,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"03be7902cdf298c5229227313d87960b5593c7a8ffc6e5f179dd56212b26f432\"" Dec 12 17:25:12.289577 containerd[1693]: time="2025-12-12T17:25:12.289553665Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:25:12.341000 audit[3143]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.341000 audit[3143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff13d9cf0 a2=0 a3=1 items=0 ppid=3033 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:25:12.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:25:12.344000 audit[3145]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.344000 audit[3145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff51c9240 a2=0 a3=1 items=0 ppid=3033 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 17:25:12.347000 audit[3148]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.347000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd15783a0 a2=0 a3=1 items=0 ppid=3033 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.347000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 17:25:12.348000 audit[3149]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.348000 audit[3149]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb30b380 a2=0 a3=1 items=0 ppid=3033 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:25:12.351000 audit[3151]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.351000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffb295430 a2=0 a3=1 items=0 ppid=3033 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:25:12.353000 audit[3152]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.353000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1ce44b0 a2=0 a3=1 items=0 ppid=3033 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:25:12.354000 audit[3154]: NETFILTER_CFG table=filter:66 family=2 
entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.354000 audit[3154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdeaa0a60 a2=0 a3=1 items=0 ppid=3033 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:25:12.358000 audit[3157]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.358000 audit[3157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe0eabf70 a2=0 a3=1 items=0 ppid=3033 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 17:25:12.359000 audit[3158]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.359000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdafe38c0 a2=0 a3=1 items=0 ppid=3033 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.359000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:25:12.361000 audit[3160]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.361000 audit[3160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd53a7900 a2=0 a3=1 items=0 ppid=3033 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:25:12.362000 audit[3161]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.362000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff010960 a2=0 a3=1 items=0 ppid=3033 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:25:12.366000 audit[3163]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.366000 audit[3163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd06bdc30 a2=0 a3=1 items=0 ppid=3033 pid=3163 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:25:12.371000 audit[3166]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.371000 audit[3166]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff062e920 a2=0 a3=1 items=0 ppid=3033 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:25:12.374000 audit[3169]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.374000 audit[3169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeedf02a0 a2=0 a3=1 items=0 ppid=3033 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.374000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:25:12.375000 audit[3170]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.375000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffed0ae6e0 a2=0 a3=1 items=0 ppid=3033 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:25:12.377000 audit[3172]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.377000 audit[3172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff0ec9990 a2=0 a3=1 items=0 ppid=3033 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.377000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:25:12.381000 audit[3175]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.381000 audit[3175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffff13f020 a2=0 a3=1 items=0 ppid=3033 pid=3175 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:25:12.382000 audit[3176]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.382000 audit[3176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc15db950 a2=0 a3=1 items=0 ppid=3033 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:25:12.384000 audit[3178]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:12.384000 audit[3178]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffffb42730 a2=0 a3=1 items=0 ppid=3033 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:25:12.407000 audit[3184]: NETFILTER_CFG table=filter:79 family=2 entries=8 
op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:12.407000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffce05b620 a2=0 a3=1 items=0 ppid=3033 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:12.418000 audit[3184]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:12.418000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffce05b620 a2=0 a3=1 items=0 ppid=3033 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:12.420000 audit[3189]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.420000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdf8921a0 a2=0 a3=1 items=0 ppid=3033 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.420000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:25:12.422000 audit[3191]: 
NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.422000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffda903b30 a2=0 a3=1 items=0 ppid=3033 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 17:25:12.427000 audit[3194]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.427000 audit[3194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffec436bc0 a2=0 a3=1 items=0 ppid=3033 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 17:25:12.427000 audit[3195]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.427000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff36c72c0 a2=0 a3=1 items=0 ppid=3033 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:25:12.429000 audit[3197]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.429000 audit[3197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff47ecdc0 a2=0 a3=1 items=0 ppid=3033 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:25:12.430000 audit[3198]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.430000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffda9b220 a2=0 a3=1 items=0 ppid=3033 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:25:12.433000 audit[3200]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.433000 audit[3200]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=744 a0=3 a1=ffffeb15bb30 a2=0 a3=1 items=0 ppid=3033 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 17:25:12.436000 audit[3203]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.436000 audit[3203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffee5692e0 a2=0 a3=1 items=0 ppid=3033 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:25:12.437000 audit[3204]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.437000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef644020 a2=0 a3=1 items=0 ppid=3033 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.437000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:25:12.441000 audit[3206]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.441000 audit[3206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9aad2e0 a2=0 a3=1 items=0 ppid=3033 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:25:12.441000 audit[3207]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.441000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff67bc180 a2=0 a3=1 items=0 ppid=3033 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:25:12.445000 audit[3209]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.445000 audit[3209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff0b875d0 a2=0 a3=1 items=0 ppid=3033 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.445000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:25:12.447000 audit[3212]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.447000 audit[3212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe33b9c00 a2=0 a3=1 items=0 ppid=3033 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.447000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:25:12.451000 audit[3215]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.451000 audit[3215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff4ca1950 a2=0 a3=1 items=0 ppid=3033 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.451000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 17:25:12.452000 audit[3216]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.452000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe66bfdc0 a2=0 a3=1 items=0 ppid=3033 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:25:12.454000 audit[3218]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.454000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd7dba090 a2=0 a3=1 items=0 ppid=3033 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:25:12.457000 audit[3221]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.457000 audit[3221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea4da450 a2=0 a3=1 items=0 ppid=3033 
pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.457000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:25:12.459000 audit[3222]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.459000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2f88bb0 a2=0 a3=1 items=0 ppid=3033 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.459000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:25:12.461000 audit[3224]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.461000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc0409720 a2=0 a3=1 items=0 ppid=3033 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:25:12.462000 audit[3225]: NETFILTER_CFG table=filter:100 
family=10 entries=1 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.462000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff9a5e10 a2=0 a3=1 items=0 ppid=3033 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:25:12.464000 audit[3227]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.464000 audit[3227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff5e6aa60 a2=0 a3=1 items=0 ppid=3033 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:25:12.468000 audit[3230]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:12.468000 audit[3230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe1784810 a2=0 a3=1 items=0 ppid=3033 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:25:12.471000 
audit[3232]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:25:12.471000 audit[3232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffde7d20a0 a2=0 a3=1 items=0 ppid=3033 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.471000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:12.471000 audit[3232]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:25:12.471000 audit[3232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffde7d20a0 a2=0 a3=1 items=0 ppid=3033 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:12.471000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:14.281735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653754701.mount: Deactivated successfully. 
Dec 12 17:25:14.698388 containerd[1693]: time="2025-12-12T17:25:14.698326544Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:14.699459 containerd[1693]: time="2025-12-12T17:25:14.699407470Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 12 17:25:14.700732 containerd[1693]: time="2025-12-12T17:25:14.700691596Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:14.705948 containerd[1693]: time="2025-12-12T17:25:14.705629262Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:14.706390 containerd[1693]: time="2025-12-12T17:25:14.706355985Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.41677496s" Dec 12 17:25:14.706472 containerd[1693]: time="2025-12-12T17:25:14.706456866Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:25:14.712203 containerd[1693]: time="2025-12-12T17:25:14.712161815Z" level=info msg="CreateContainer within sandbox \"03be7902cdf298c5229227313d87960b5593c7a8ffc6e5f179dd56212b26f432\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:25:14.724775 containerd[1693]: time="2025-12-12T17:25:14.724721199Z" level=info msg="Container 
bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:14.724955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4094498306.mount: Deactivated successfully. Dec 12 17:25:14.731947 containerd[1693]: time="2025-12-12T17:25:14.731893515Z" level=info msg="CreateContainer within sandbox \"03be7902cdf298c5229227313d87960b5593c7a8ffc6e5f179dd56212b26f432\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596\"" Dec 12 17:25:14.732620 containerd[1693]: time="2025-12-12T17:25:14.732500078Z" level=info msg="StartContainer for \"bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596\"" Dec 12 17:25:14.733442 containerd[1693]: time="2025-12-12T17:25:14.733399563Z" level=info msg="connecting to shim bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596" address="unix:///run/containerd/s/fa626b4f867d7a809365e5c49338f2c4b531d56e3a4555d50b5286c3e914c185" protocol=ttrpc version=3 Dec 12 17:25:14.762039 systemd[1]: Started cri-containerd-bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596.scope - libcontainer container bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596. 
Dec 12 17:25:14.771000 audit: BPF prog-id=146 op=LOAD Dec 12 17:25:14.771000 audit: BPF prog-id=147 op=LOAD Dec 12 17:25:14.771000 audit[3241]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3087 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:14.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262356565306138393562663933613733663032363566643966376638 Dec 12 17:25:14.771000 audit: BPF prog-id=147 op=UNLOAD Dec 12 17:25:14.771000 audit[3241]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:14.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262356565306138393562663933613733663032363566643966376638 Dec 12 17:25:14.772000 audit: BPF prog-id=148 op=LOAD Dec 12 17:25:14.772000 audit[3241]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3087 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:14.772000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262356565306138393562663933613733663032363566643966376638 Dec 12 17:25:14.772000 audit: BPF prog-id=149 op=LOAD Dec 12 17:25:14.772000 audit[3241]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3087 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:14.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262356565306138393562663933613733663032363566643966376638 Dec 12 17:25:14.772000 audit: BPF prog-id=149 op=UNLOAD Dec 12 17:25:14.772000 audit[3241]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:14.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262356565306138393562663933613733663032363566643966376638 Dec 12 17:25:14.772000 audit: BPF prog-id=148 op=UNLOAD Dec 12 17:25:14.772000 audit[3241]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:14.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262356565306138393562663933613733663032363566643966376638 Dec 12 17:25:14.772000 audit: BPF prog-id=150 op=LOAD Dec 12 17:25:14.772000 audit[3241]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3087 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:14.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262356565306138393562663933613733663032363566643966376638 Dec 12 17:25:14.788815 containerd[1693]: time="2025-12-12T17:25:14.788780644Z" level=info msg="StartContainer for \"bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596\" returns successfully" Dec 12 17:25:15.232106 kubelet[2914]: I1212 17:25:15.231705 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sb8b5" podStartSLOduration=1.813402607 podStartE2EDuration="4.231684495s" podCreationTimestamp="2025-12-12 17:25:11 +0000 UTC" firstStartedPulling="2025-12-12 17:25:12.289250383 +0000 UTC m=+5.398103549" lastFinishedPulling="2025-12-12 17:25:14.707532271 +0000 UTC m=+7.816385437" observedRunningTime="2025-12-12 17:25:15.231152972 +0000 UTC m=+8.340006138" watchObservedRunningTime="2025-12-12 17:25:15.231684495 +0000 UTC m=+8.340537701" Dec 12 17:25:16.765656 systemd[1]: cri-containerd-bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596.scope: Deactivated successfully. 
Dec 12 17:25:16.768129 containerd[1693]: time="2025-12-12T17:25:16.767996821Z" level=info msg="received container exit event container_id:\"bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596\" id:\"bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596\" pid:3254 exit_status:1 exited_at:{seconds:1765560316 nanos:767559099}" Dec 12 17:25:16.772000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:25:16.772000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:25:16.801766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596-rootfs.mount: Deactivated successfully. Dec 12 17:25:20.058259 sudo[1953]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:20.059668 kernel: kauditd_printk_skb: 226 callbacks suppressed Dec 12 17:25:20.059709 kernel: audit: type=1106 audit(1765560320.056:523): pid=1953 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:20.056000 audit[1953]: USER_END pid=1953 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:20.056000 audit[1953]: CRED_DISP pid=1953 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:20.063856 kernel: audit: type=1104 audit(1765560320.056:524): pid=1953 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:20.212865 sshd[1952]: Connection closed by 139.178.89.65 port 51800 Dec 12 17:25:20.213076 sshd-session[1933]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:20.213000 audit[1933]: USER_END pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:20.218344 systemd[1]: sshd@6-10.0.7.100:22-139.178.89.65:51800.service: Deactivated successfully. Dec 12 17:25:20.213000 audit[1933]: CRED_DISP pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:20.220171 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:25:20.220398 systemd[1]: session-7.scope: Consumed 7.310s CPU time, 221.2M memory peak. 
Dec 12 17:25:20.221362 kernel: audit: type=1106 audit(1765560320.213:525): pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:20.221443 kernel: audit: type=1104 audit(1765560320.213:526): pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:20.221498 kernel: audit: type=1131 audit(1765560320.216:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.7.100:22-139.178.89.65:51800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:20.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.7.100:22-139.178.89.65:51800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:20.222023 systemd-logind[1668]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:25:20.222923 systemd-logind[1668]: Removed session 7. 
Dec 12 17:25:20.232379 kubelet[2914]: I1212 17:25:20.232264 2914 scope.go:117] "RemoveContainer" containerID="bb5ee0a895bf93a73f0265fd9f7f898632b723ea1b03240aa154fedad2b44596" Dec 12 17:25:20.235425 containerd[1693]: time="2025-12-12T17:25:20.235342799Z" level=info msg="CreateContainer within sandbox \"03be7902cdf298c5229227313d87960b5593c7a8ffc6e5f179dd56212b26f432\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 17:25:20.249488 containerd[1693]: time="2025-12-12T17:25:20.249426351Z" level=info msg="Container 5b4baeeb772c07b482488f7744d61a0239775ea289858c3fde180379243a2487: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:20.259885 containerd[1693]: time="2025-12-12T17:25:20.259688243Z" level=info msg="CreateContainer within sandbox \"03be7902cdf298c5229227313d87960b5593c7a8ffc6e5f179dd56212b26f432\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5b4baeeb772c07b482488f7744d61a0239775ea289858c3fde180379243a2487\"" Dec 12 17:25:20.261228 containerd[1693]: time="2025-12-12T17:25:20.261193490Z" level=info msg="StartContainer for \"5b4baeeb772c07b482488f7744d61a0239775ea289858c3fde180379243a2487\"" Dec 12 17:25:20.262073 containerd[1693]: time="2025-12-12T17:25:20.262003735Z" level=info msg="connecting to shim 5b4baeeb772c07b482488f7744d61a0239775ea289858c3fde180379243a2487" address="unix:///run/containerd/s/fa626b4f867d7a809365e5c49338f2c4b531d56e3a4555d50b5286c3e914c185" protocol=ttrpc version=3 Dec 12 17:25:20.283031 systemd[1]: Started cri-containerd-5b4baeeb772c07b482488f7744d61a0239775ea289858c3fde180379243a2487.scope - libcontainer container 5b4baeeb772c07b482488f7744d61a0239775ea289858c3fde180379243a2487. 
Dec 12 17:25:20.291000 audit: BPF prog-id=151 op=LOAD Dec 12 17:25:20.291000 audit: BPF prog-id=152 op=LOAD Dec 12 17:25:20.291000 audit[3340]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3087 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:20.297590 kernel: audit: type=1334 audit(1765560320.291:528): prog-id=151 op=LOAD Dec 12 17:25:20.297656 kernel: audit: type=1334 audit(1765560320.291:529): prog-id=152 op=LOAD Dec 12 17:25:20.297675 kernel: audit: type=1300 audit(1765560320.291:529): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3087 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:20.297694 kernel: audit: type=1327 audit(1765560320.291:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562346261656562373732633037623438323438386637373434643631 Dec 12 17:25:20.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562346261656562373732633037623438323438386637373434643631 Dec 12 17:25:20.292000 audit: BPF prog-id=152 op=UNLOAD Dec 12 17:25:20.301342 kernel: audit: type=1334 audit(1765560320.292:530): prog-id=152 op=UNLOAD Dec 12 17:25:20.292000 audit[3340]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:20.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562346261656562373732633037623438323438386637373434643631 Dec 12 17:25:20.292000 audit: BPF prog-id=153 op=LOAD Dec 12 17:25:20.292000 audit[3340]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3087 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:20.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562346261656562373732633037623438323438386637373434643631 Dec 12 17:25:20.293000 audit: BPF prog-id=154 op=LOAD Dec 12 17:25:20.293000 audit[3340]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3087 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:20.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562346261656562373732633037623438323438386637373434643631 Dec 12 17:25:20.298000 audit: BPF prog-id=154 op=UNLOAD Dec 12 17:25:20.298000 audit[3340]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:20.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562346261656562373732633037623438323438386637373434643631 Dec 12 17:25:20.298000 audit: BPF prog-id=153 op=UNLOAD Dec 12 17:25:20.298000 audit[3340]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:20.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562346261656562373732633037623438323438386637373434643631 Dec 12 17:25:20.298000 audit: BPF prog-id=155 op=LOAD Dec 12 17:25:20.298000 audit[3340]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3087 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:20.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562346261656562373732633037623438323438386637373434643631 Dec 12 17:25:20.320424 containerd[1693]: time="2025-12-12T17:25:20.320384751Z" level=info msg="StartContainer for \"5b4baeeb772c07b482488f7744d61a0239775ea289858c3fde180379243a2487\" returns 
successfully" Dec 12 17:25:24.703000 audit[3376]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:24.703000 audit[3376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe9140ee0 a2=0 a3=1 items=0 ppid=3033 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:24.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:24.707000 audit[3376]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:24.707000 audit[3376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe9140ee0 a2=0 a3=1 items=0 ppid=3033 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:24.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:24.729000 audit[3378]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:24.729000 audit[3378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff8ad0230 a2=0 a3=1 items=0 ppid=3033 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:24.729000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:24.735000 audit[3378]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:24.735000 audit[3378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff8ad0230 a2=0 a3=1 items=0 ppid=3033 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:24.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:27.050000 audit[3380]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:27.053300 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 12 17:25:27.053372 kernel: audit: type=1325 audit(1765560327.050:540): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:27.050000 audit[3380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe90d89f0 a2=0 a3=1 items=0 ppid=3033 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:27.059035 kernel: audit: type=1300 audit(1765560327.050:540): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe90d89f0 a2=0 a3=1 items=0 ppid=3033 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:27.050000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:27.061046 kernel: audit: type=1327 audit(1765560327.050:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:27.062000 audit[3380]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:27.062000 audit[3380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe90d89f0 a2=0 a3=1 items=0 ppid=3033 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:27.069858 kernel: audit: type=1325 audit(1765560327.062:541): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:27.070439 kernel: audit: type=1300 audit(1765560327.062:541): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe90d89f0 a2=0 a3=1 items=0 ppid=3033 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:27.070463 kernel: audit: type=1327 audit(1765560327.062:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:27.062000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:27.100000 audit[3382]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:27.100000 audit[3382]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe7002080 a2=0 a3=1 items=0 ppid=3033 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:27.106955 kernel: audit: type=1325 audit(1765560327.100:542): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:27.107023 kernel: audit: type=1300 audit(1765560327.100:542): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe7002080 a2=0 a3=1 items=0 ppid=3033 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:27.107070 kernel: audit: type=1327 audit(1765560327.100:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:27.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:27.112000 audit[3382]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:27.112000 audit[3382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7002080 a2=0 a3=1 items=0 ppid=3033 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:27.112000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:27.116867 kernel: audit: type=1325 audit(1765560327.112:543): 
table=nat:112 family=2 entries=12 op=nft_register_rule pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:28.192000 audit[3384]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:28.192000 audit[3384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe0e9f5c0 a2=0 a3=1 items=0 ppid=3033 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:28.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:28.196000 audit[3384]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:28.196000 audit[3384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe0e9f5c0 a2=0 a3=1 items=0 ppid=3033 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:28.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:29.210000 audit[3386]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:29.210000 audit[3386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffce04bde0 a2=0 a3=1 items=0 ppid=3033 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 17:25:29.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:29.217000 audit[3386]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:29.217000 audit[3386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce04bde0 a2=0 a3=1 items=0 ppid=3033 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:29.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:29.823097 systemd[1]: Created slice kubepods-besteffort-poda7ce6759_9c1b_4f29_bb69_14b9db3c6ab2.slice - libcontainer container kubepods-besteffort-poda7ce6759_9c1b_4f29_bb69_14b9db3c6ab2.slice. Dec 12 17:25:29.940207 systemd[1]: Created slice kubepods-besteffort-pod9de5c210_d6aa_4882_9cb1_f0acf6dd117c.slice - libcontainer container kubepods-besteffort-pod9de5c210_d6aa_4882_9cb1_f0acf6dd117c.slice. 
Dec 12 17:25:29.986924 kubelet[2914]: I1212 17:25:29.986880 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a7ce6759-9c1b-4f29-bb69-14b9db3c6ab2-typha-certs\") pod \"calico-typha-5dff888f8c-6lw9v\" (UID: \"a7ce6759-9c1b-4f29-bb69-14b9db3c6ab2\") " pod="calico-system/calico-typha-5dff888f8c-6lw9v" Dec 12 17:25:29.987417 kubelet[2914]: I1212 17:25:29.987343 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk8m8\" (UniqueName: \"kubernetes.io/projected/a7ce6759-9c1b-4f29-bb69-14b9db3c6ab2-kube-api-access-bk8m8\") pod \"calico-typha-5dff888f8c-6lw9v\" (UID: \"a7ce6759-9c1b-4f29-bb69-14b9db3c6ab2\") " pod="calico-system/calico-typha-5dff888f8c-6lw9v" Dec 12 17:25:29.987417 kubelet[2914]: I1212 17:25:29.987378 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7ce6759-9c1b-4f29-bb69-14b9db3c6ab2-tigera-ca-bundle\") pod \"calico-typha-5dff888f8c-6lw9v\" (UID: \"a7ce6759-9c1b-4f29-bb69-14b9db3c6ab2\") " pod="calico-system/calico-typha-5dff888f8c-6lw9v" Dec 12 17:25:30.088600 kubelet[2914]: I1212 17:25:30.088462 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdtn\" (UniqueName: \"kubernetes.io/projected/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-kube-api-access-pvdtn\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088600 kubelet[2914]: I1212 17:25:30.088526 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-node-certs\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " 
pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088600 kubelet[2914]: I1212 17:25:30.088547 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-tigera-ca-bundle\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088600 kubelet[2914]: I1212 17:25:30.088563 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-cni-bin-dir\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088600 kubelet[2914]: I1212 17:25:30.088578 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-cni-net-dir\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088800 kubelet[2914]: I1212 17:25:30.088604 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-lib-modules\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088800 kubelet[2914]: I1212 17:25:30.088620 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-var-run-calico\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088800 kubelet[2914]: 
I1212 17:25:30.088637 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-policysync\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088800 kubelet[2914]: I1212 17:25:30.088651 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-var-lib-calico\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088800 kubelet[2914]: I1212 17:25:30.088664 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-xtables-lock\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088954 kubelet[2914]: I1212 17:25:30.088694 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-cni-log-dir\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.088954 kubelet[2914]: I1212 17:25:30.088708 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9de5c210-d6aa-4882-9cb1-f0acf6dd117c-flexvol-driver-host\") pod \"calico-node-brsc9\" (UID: \"9de5c210-d6aa-4882-9cb1-f0acf6dd117c\") " pod="calico-system/calico-node-brsc9" Dec 12 17:25:30.128537 containerd[1693]: time="2025-12-12T17:25:30.128485152Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-typha-5dff888f8c-6lw9v,Uid:a7ce6759-9c1b-4f29-bb69-14b9db3c6ab2,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:30.133263 kubelet[2914]: E1212 17:25:30.133198 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:30.165796 containerd[1693]: time="2025-12-12T17:25:30.165754702Z" level=info msg="connecting to shim 25773b8201d51ec79338b33ec1fb466d29c61427b4ea1930e90c86c95cb57923" address="unix:///run/containerd/s/49bd74ea85f36e8373b7b0d302a16adfc5655baba603cf648c98b9b1b2d4764d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:30.190062 systemd[1]: Started cri-containerd-25773b8201d51ec79338b33ec1fb466d29c61427b4ea1930e90c86c95cb57923.scope - libcontainer container 25773b8201d51ec79338b33ec1fb466d29c61427b4ea1930e90c86c95cb57923. Dec 12 17:25:30.192121 kubelet[2914]: E1212 17:25:30.192096 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.192121 kubelet[2914]: W1212 17:25:30.192119 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.192247 kubelet[2914]: E1212 17:25:30.192148 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.201665 kubelet[2914]: E1212 17:25:30.201629 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.201665 kubelet[2914]: W1212 17:25:30.201652 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.201815 kubelet[2914]: E1212 17:25:30.201674 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.205917 kubelet[2914]: E1212 17:25:30.205885 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.206160 kubelet[2914]: W1212 17:25:30.205934 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.206160 kubelet[2914]: E1212 17:25:30.205958 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.207000 audit: BPF prog-id=156 op=LOAD Dec 12 17:25:30.208000 audit: BPF prog-id=157 op=LOAD Dec 12 17:25:30.208000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3396 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235373733623832303164353165633739333338623333656331666234 Dec 12 17:25:30.208000 audit: BPF prog-id=157 op=UNLOAD Dec 12 17:25:30.208000 audit[3410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235373733623832303164353165633739333338623333656331666234 Dec 12 17:25:30.209000 audit: BPF prog-id=158 op=LOAD Dec 12 17:25:30.209000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3396 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.209000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235373733623832303164353165633739333338623333656331666234 Dec 12 17:25:30.209000 audit: BPF prog-id=159 op=LOAD Dec 12 17:25:30.209000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3396 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235373733623832303164353165633739333338623333656331666234 Dec 12 17:25:30.209000 audit: BPF prog-id=159 op=UNLOAD Dec 12 17:25:30.209000 audit[3410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235373733623832303164353165633739333338623333656331666234 Dec 12 17:25:30.209000 audit: BPF prog-id=158 op=UNLOAD Dec 12 17:25:30.209000 audit[3410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:30.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235373733623832303164353165633739333338623333656331666234 Dec 12 17:25:30.209000 audit: BPF prog-id=160 op=LOAD Dec 12 17:25:30.209000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3396 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235373733623832303164353165633739333338623333656331666234 Dec 12 17:25:30.240000 audit[3438]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:30.240000 audit[3438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcaaad0f0 a2=0 a3=1 items=0 ppid=3033 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:30.243166 containerd[1693]: time="2025-12-12T17:25:30.243127895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-brsc9,Uid:9de5c210-d6aa-4882-9cb1-f0acf6dd117c,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:30.246000 audit[3438]: NETFILTER_CFG table=nat:118 family=2 entries=12 
op=nft_register_rule pid=3438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:30.246000 audit[3438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcaaad0f0 a2=0 a3=1 items=0 ppid=3033 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:30.253300 containerd[1693]: time="2025-12-12T17:25:30.253183786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dff888f8c-6lw9v,Uid:a7ce6759-9c1b-4f29-bb69-14b9db3c6ab2,Namespace:calico-system,Attempt:0,} returns sandbox id \"25773b8201d51ec79338b33ec1fb466d29c61427b4ea1930e90c86c95cb57923\"" Dec 12 17:25:30.254860 containerd[1693]: time="2025-12-12T17:25:30.254805834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:25:30.275374 containerd[1693]: time="2025-12-12T17:25:30.275324059Z" level=info msg="connecting to shim b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768" address="unix:///run/containerd/s/f2b8f6728a493d4502cc07e0fa855bd61cb28ca58bf4a6eea45954497ad08b1e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:30.291440 kubelet[2914]: E1212 17:25:30.291413 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.291440 kubelet[2914]: W1212 17:25:30.291434 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.291576 kubelet[2914]: E1212 17:25:30.291459 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.291912 kubelet[2914]: E1212 17:25:30.291890 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.291954 kubelet[2914]: W1212 17:25:30.291911 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.291954 kubelet[2914]: E1212 17:25:30.291940 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.292119 kubelet[2914]: E1212 17:25:30.292105 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.292153 kubelet[2914]: W1212 17:25:30.292120 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.292153 kubelet[2914]: E1212 17:25:30.292132 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.292153 kubelet[2914]: I1212 17:25:30.292095 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38591688-bf7e-4006-99b5-49217c275f18-kubelet-dir\") pod \"csi-node-driver-nmqkr\" (UID: \"38591688-bf7e-4006-99b5-49217c275f18\") " pod="calico-system/csi-node-driver-nmqkr" Dec 12 17:25:30.292371 kubelet[2914]: E1212 17:25:30.292356 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.292413 kubelet[2914]: W1212 17:25:30.292370 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.292413 kubelet[2914]: E1212 17:25:30.292382 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.292456 kubelet[2914]: I1212 17:25:30.292416 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/38591688-bf7e-4006-99b5-49217c275f18-varrun\") pod \"csi-node-driver-nmqkr\" (UID: \"38591688-bf7e-4006-99b5-49217c275f18\") " pod="calico-system/csi-node-driver-nmqkr" Dec 12 17:25:30.292627 kubelet[2914]: E1212 17:25:30.292611 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.292627 kubelet[2914]: W1212 17:25:30.292626 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.292685 kubelet[2914]: E1212 17:25:30.292644 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.292685 kubelet[2914]: I1212 17:25:30.292678 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/38591688-bf7e-4006-99b5-49217c275f18-registration-dir\") pod \"csi-node-driver-nmqkr\" (UID: \"38591688-bf7e-4006-99b5-49217c275f18\") " pod="calico-system/csi-node-driver-nmqkr" Dec 12 17:25:30.292880 kubelet[2914]: E1212 17:25:30.292862 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.292880 kubelet[2914]: W1212 17:25:30.292878 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.292971 kubelet[2914]: E1212 17:25:30.292889 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.292971 kubelet[2914]: I1212 17:25:30.292925 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/38591688-bf7e-4006-99b5-49217c275f18-socket-dir\") pod \"csi-node-driver-nmqkr\" (UID: \"38591688-bf7e-4006-99b5-49217c275f18\") " pod="calico-system/csi-node-driver-nmqkr" Dec 12 17:25:30.293171 kubelet[2914]: E1212 17:25:30.293151 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.293171 kubelet[2914]: W1212 17:25:30.293169 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.293229 kubelet[2914]: E1212 17:25:30.293184 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.293229 kubelet[2914]: I1212 17:25:30.293218 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srv2g\" (UniqueName: \"kubernetes.io/projected/38591688-bf7e-4006-99b5-49217c275f18-kube-api-access-srv2g\") pod \"csi-node-driver-nmqkr\" (UID: \"38591688-bf7e-4006-99b5-49217c275f18\") " pod="calico-system/csi-node-driver-nmqkr" Dec 12 17:25:30.293649 kubelet[2914]: E1212 17:25:30.293561 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.293649 kubelet[2914]: W1212 17:25:30.293633 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.293649 kubelet[2914]: E1212 17:25:30.293648 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.294026 kubelet[2914]: E1212 17:25:30.293856 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.294026 kubelet[2914]: W1212 17:25:30.293872 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.294026 kubelet[2914]: E1212 17:25:30.293881 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.294144 kubelet[2914]: E1212 17:25:30.294110 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.294144 kubelet[2914]: W1212 17:25:30.294121 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.294144 kubelet[2914]: E1212 17:25:30.294132 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.294320 systemd[1]: Started cri-containerd-b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768.scope - libcontainer container b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768. Dec 12 17:25:30.294720 kubelet[2914]: E1212 17:25:30.294562 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.294720 kubelet[2914]: W1212 17:25:30.294577 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.294720 kubelet[2914]: E1212 17:25:30.294591 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.295816 kubelet[2914]: E1212 17:25:30.295783 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.295994 kubelet[2914]: W1212 17:25:30.295872 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.295994 kubelet[2914]: E1212 17:25:30.295894 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.296120 kubelet[2914]: E1212 17:25:30.296093 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.296120 kubelet[2914]: W1212 17:25:30.296114 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.296193 kubelet[2914]: E1212 17:25:30.296125 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.296332 kubelet[2914]: E1212 17:25:30.296318 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.296332 kubelet[2914]: W1212 17:25:30.296330 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.296393 kubelet[2914]: E1212 17:25:30.296343 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.299170 kubelet[2914]: E1212 17:25:30.299051 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.299170 kubelet[2914]: W1212 17:25:30.299079 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.299170 kubelet[2914]: E1212 17:25:30.299101 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.304000 audit: BPF prog-id=161 op=LOAD Dec 12 17:25:30.305000 audit: BPF prog-id=162 op=LOAD Dec 12 17:25:30.305000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3453 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313363393965643237323964393337343835613639616131323161 Dec 12 17:25:30.305000 audit: BPF prog-id=162 op=UNLOAD Dec 12 17:25:30.305000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313363393965643237323964393337343835613639616131323161 Dec 12 17:25:30.305000 audit: BPF prog-id=163 op=LOAD Dec 12 17:25:30.305000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3453 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.305000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313363393965643237323964393337343835613639616131323161 Dec 12 17:25:30.305000 audit: BPF prog-id=164 op=LOAD Dec 12 17:25:30.305000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3453 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313363393965643237323964393337343835613639616131323161 Dec 12 17:25:30.305000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:25:30.305000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313363393965643237323964393337343835613639616131323161 Dec 12 17:25:30.305000 audit: BPF prog-id=163 op=UNLOAD Dec 12 17:25:30.305000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:30.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313363393965643237323964393337343835613639616131323161 Dec 12 17:25:30.305000 audit: BPF prog-id=165 op=LOAD Dec 12 17:25:30.305000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3453 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:30.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313363393965643237323964393337343835613639616131323161 Dec 12 17:25:30.321386 containerd[1693]: time="2025-12-12T17:25:30.321337532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-brsc9,Uid:9de5c210-d6aa-4882-9cb1-f0acf6dd117c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768\"" Dec 12 17:25:30.394867 kubelet[2914]: E1212 17:25:30.394591 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.394867 kubelet[2914]: W1212 17:25:30.394619 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.394867 kubelet[2914]: E1212 17:25:30.394639 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.395137 kubelet[2914]: E1212 17:25:30.395046 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.395137 kubelet[2914]: W1212 17:25:30.395057 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.395417 kubelet[2914]: E1212 17:25:30.395165 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.395546 kubelet[2914]: E1212 17:25:30.395533 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.395546 kubelet[2914]: W1212 17:25:30.395568 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.395546 kubelet[2914]: E1212 17:25:30.395582 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.396616 kubelet[2914]: E1212 17:25:30.396529 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.396616 kubelet[2914]: W1212 17:25:30.396548 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.396616 kubelet[2914]: E1212 17:25:30.396564 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.397098 kubelet[2914]: E1212 17:25:30.397080 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.397098 kubelet[2914]: W1212 17:25:30.397095 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.397213 kubelet[2914]: E1212 17:25:30.397108 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.397738 kubelet[2914]: E1212 17:25:30.397592 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.397738 kubelet[2914]: W1212 17:25:30.397606 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.397738 kubelet[2914]: E1212 17:25:30.397617 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.397871 kubelet[2914]: E1212 17:25:30.397784 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.397871 kubelet[2914]: W1212 17:25:30.397793 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.397871 kubelet[2914]: E1212 17:25:30.397802 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.398760 kubelet[2914]: E1212 17:25:30.398597 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.398760 kubelet[2914]: W1212 17:25:30.398613 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.398760 kubelet[2914]: E1212 17:25:30.398626 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.399308 kubelet[2914]: E1212 17:25:30.399239 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.399308 kubelet[2914]: W1212 17:25:30.399254 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.399308 kubelet[2914]: E1212 17:25:30.399266 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.399458 kubelet[2914]: E1212 17:25:30.399440 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.399458 kubelet[2914]: W1212 17:25:30.399450 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.399458 kubelet[2914]: E1212 17:25:30.399459 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.400404 kubelet[2914]: E1212 17:25:30.400378 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.400404 kubelet[2914]: W1212 17:25:30.400393 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.400404 kubelet[2914]: E1212 17:25:30.400408 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.400695 kubelet[2914]: E1212 17:25:30.400664 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.400695 kubelet[2914]: W1212 17:25:30.400678 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.400695 kubelet[2914]: E1212 17:25:30.400690 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.401269 kubelet[2914]: E1212 17:25:30.401231 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.401269 kubelet[2914]: W1212 17:25:30.401249 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.401269 kubelet[2914]: E1212 17:25:30.401264 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.401505 kubelet[2914]: E1212 17:25:30.401493 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.401505 kubelet[2914]: W1212 17:25:30.401505 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.401565 kubelet[2914]: E1212 17:25:30.401515 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.401665 kubelet[2914]: E1212 17:25:30.401656 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.401693 kubelet[2914]: W1212 17:25:30.401665 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.401693 kubelet[2914]: E1212 17:25:30.401674 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.401814 kubelet[2914]: E1212 17:25:30.401801 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.401873 kubelet[2914]: W1212 17:25:30.401824 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.401873 kubelet[2914]: E1212 17:25:30.401855 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.401995 kubelet[2914]: E1212 17:25:30.401985 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.402022 kubelet[2914]: W1212 17:25:30.401995 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.402022 kubelet[2914]: E1212 17:25:30.402003 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.402141 kubelet[2914]: E1212 17:25:30.402132 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.402173 kubelet[2914]: W1212 17:25:30.402141 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.402173 kubelet[2914]: E1212 17:25:30.402149 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.402449 kubelet[2914]: E1212 17:25:30.402432 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.402479 kubelet[2914]: W1212 17:25:30.402451 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.402479 kubelet[2914]: E1212 17:25:30.402465 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.402646 kubelet[2914]: E1212 17:25:30.402632 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.402678 kubelet[2914]: W1212 17:25:30.402647 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.402678 kubelet[2914]: E1212 17:25:30.402657 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.402873 kubelet[2914]: E1212 17:25:30.402857 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.402908 kubelet[2914]: W1212 17:25:30.402873 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.402908 kubelet[2914]: E1212 17:25:30.402886 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.403115 kubelet[2914]: E1212 17:25:30.403102 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.403146 kubelet[2914]: W1212 17:25:30.403116 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.403146 kubelet[2914]: E1212 17:25:30.403126 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.403565 kubelet[2914]: E1212 17:25:30.403541 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.403565 kubelet[2914]: W1212 17:25:30.403561 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.403629 kubelet[2914]: E1212 17:25:30.403575 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.403888 kubelet[2914]: E1212 17:25:30.403748 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.403888 kubelet[2914]: W1212 17:25:30.403760 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.403888 kubelet[2914]: E1212 17:25:30.403769 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:30.404139 kubelet[2914]: E1212 17:25:30.403935 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.404139 kubelet[2914]: W1212 17:25:30.403944 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.404139 kubelet[2914]: E1212 17:25:30.403952 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:30.414467 kubelet[2914]: E1212 17:25:30.414391 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:30.414467 kubelet[2914]: W1212 17:25:30.414412 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:30.414467 kubelet[2914]: E1212 17:25:30.414430 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:31.718316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010969450.mount: Deactivated successfully. Dec 12 17:25:31.974978 kubelet[2914]: E1212 17:25:31.974743 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:32.952533 containerd[1693]: time="2025-12-12T17:25:32.952456421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:32.953752 containerd[1693]: time="2025-12-12T17:25:32.953680747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 12 17:25:32.954964 containerd[1693]: time="2025-12-12T17:25:32.954919234Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:32.957146 containerd[1693]: time="2025-12-12T17:25:32.957096005Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:32.957775 containerd[1693]: time="2025-12-12T17:25:32.957729488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.702855293s" Dec 12 17:25:32.957775 containerd[1693]: time="2025-12-12T17:25:32.957761048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:25:32.958685 containerd[1693]: time="2025-12-12T17:25:32.958650453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:25:32.970452 containerd[1693]: time="2025-12-12T17:25:32.970408152Z" level=info msg="CreateContainer within sandbox \"25773b8201d51ec79338b33ec1fb466d29c61427b4ea1930e90c86c95cb57923\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:25:32.983617 containerd[1693]: time="2025-12-12T17:25:32.983552379Z" level=info msg="Container 70bb47119abb8a6079687d336ee3818c6be3016702c7ca22f5446c0fa38178e4: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:32.994887 containerd[1693]: time="2025-12-12T17:25:32.994815356Z" level=info msg="CreateContainer within sandbox \"25773b8201d51ec79338b33ec1fb466d29c61427b4ea1930e90c86c95cb57923\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"70bb47119abb8a6079687d336ee3818c6be3016702c7ca22f5446c0fa38178e4\"" Dec 12 17:25:32.995601 containerd[1693]: time="2025-12-12T17:25:32.995569880Z" level=info msg="StartContainer for 
\"70bb47119abb8a6079687d336ee3818c6be3016702c7ca22f5446c0fa38178e4\"" Dec 12 17:25:32.996931 containerd[1693]: time="2025-12-12T17:25:32.996880807Z" level=info msg="connecting to shim 70bb47119abb8a6079687d336ee3818c6be3016702c7ca22f5446c0fa38178e4" address="unix:///run/containerd/s/49bd74ea85f36e8373b7b0d302a16adfc5655baba603cf648c98b9b1b2d4764d" protocol=ttrpc version=3 Dec 12 17:25:33.018040 systemd[1]: Started cri-containerd-70bb47119abb8a6079687d336ee3818c6be3016702c7ca22f5446c0fa38178e4.scope - libcontainer container 70bb47119abb8a6079687d336ee3818c6be3016702c7ca22f5446c0fa38178e4. Dec 12 17:25:33.030856 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 12 17:25:33.030965 kernel: audit: type=1334 audit(1765560333.027:566): prog-id=166 op=LOAD Dec 12 17:25:33.027000 audit: BPF prog-id=166 op=LOAD Dec 12 17:25:33.028000 audit: BPF prog-id=167 op=LOAD Dec 12 17:25:33.028000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.031965 kernel: audit: type=1334 audit(1765560333.028:567): prog-id=167 op=LOAD Dec 12 17:25:33.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.037993 kernel: audit: type=1300 audit(1765560333.028:567): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.038061 kernel: audit: type=1327 
audit(1765560333.028:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.028000 audit: BPF prog-id=167 op=UNLOAD Dec 12 17:25:33.028000 audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.041695 kernel: audit: type=1334 audit(1765560333.028:568): prog-id=167 op=UNLOAD Dec 12 17:25:33.041757 kernel: audit: type=1300 audit(1765560333.028:568): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.041781 kernel: audit: type=1327 audit(1765560333.028:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.028000 audit: BPF prog-id=168 op=LOAD Dec 12 17:25:33.045286 kernel: audit: type=1334 audit(1765560333.028:569): prog-id=168 op=LOAD Dec 12 17:25:33.028000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001763e8 a2=98 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.048496 kernel: audit: type=1300 audit(1765560333.028:569): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.048591 kernel: audit: type=1327 audit(1765560333.028:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.029000 audit: BPF prog-id=169 op=LOAD Dec 12 17:25:33.029000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.029000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:25:33.029000 
audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.029000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:25:33.029000 audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.029000 audit: BPF prog-id=170 op=LOAD Dec 12 17:25:33.029000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3396 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626234373131396162623861363037393638376433333665653338 Dec 12 17:25:33.069195 containerd[1693]: 
time="2025-12-12T17:25:33.069146734Z" level=info msg="StartContainer for \"70bb47119abb8a6079687d336ee3818c6be3016702c7ca22f5446c0fa38178e4\" returns successfully" Dec 12 17:25:33.309168 kubelet[2914]: E1212 17:25:33.309049 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.309168 kubelet[2914]: W1212 17:25:33.309082 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.309168 kubelet[2914]: E1212 17:25:33.309106 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.310634 kubelet[2914]: E1212 17:25:33.310602 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.310819 kubelet[2914]: W1212 17:25:33.310628 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.310819 kubelet[2914]: E1212 17:25:33.310678 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:33.311484 kubelet[2914]: E1212 17:25:33.311393 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.311484 kubelet[2914]: W1212 17:25:33.311410 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.311484 kubelet[2914]: E1212 17:25:33.311425 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.312081 kubelet[2914]: E1212 17:25:33.312021 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.312081 kubelet[2914]: W1212 17:25:33.312046 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.312081 kubelet[2914]: E1212 17:25:33.312061 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:33.314058 kubelet[2914]: E1212 17:25:33.313969 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.314058 kubelet[2914]: W1212 17:25:33.313994 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.314058 kubelet[2914]: E1212 17:25:33.314009 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.314468 kubelet[2914]: E1212 17:25:33.314247 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.314468 kubelet[2914]: W1212 17:25:33.314263 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.314468 kubelet[2914]: E1212 17:25:33.314275 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:33.314629 kubelet[2914]: E1212 17:25:33.314599 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.314629 kubelet[2914]: W1212 17:25:33.314619 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.314686 kubelet[2914]: E1212 17:25:33.314631 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.316659 kubelet[2914]: E1212 17:25:33.316504 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.316659 kubelet[2914]: W1212 17:25:33.316655 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.316763 kubelet[2914]: E1212 17:25:33.316673 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:33.317920 kubelet[2914]: E1212 17:25:33.317888 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.317920 kubelet[2914]: W1212 17:25:33.317913 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.317920 kubelet[2914]: E1212 17:25:33.317927 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.320085 kubelet[2914]: E1212 17:25:33.320055 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.320085 kubelet[2914]: W1212 17:25:33.320079 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.320085 kubelet[2914]: E1212 17:25:33.320093 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:33.320307 kubelet[2914]: E1212 17:25:33.320293 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.320307 kubelet[2914]: W1212 17:25:33.320305 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.320377 kubelet[2914]: E1212 17:25:33.320315 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.320444 kubelet[2914]: E1212 17:25:33.320433 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.320444 kubelet[2914]: W1212 17:25:33.320443 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.320508 kubelet[2914]: E1212 17:25:33.320451 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:33.320590 kubelet[2914]: E1212 17:25:33.320575 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.320590 kubelet[2914]: W1212 17:25:33.320587 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.320715 kubelet[2914]: E1212 17:25:33.320594 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.320744 kubelet[2914]: E1212 17:25:33.320725 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.320744 kubelet[2914]: W1212 17:25:33.320731 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.320744 kubelet[2914]: E1212 17:25:33.320741 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:33.321978 kubelet[2914]: E1212 17:25:33.321949 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.321978 kubelet[2914]: W1212 17:25:33.321973 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.322141 kubelet[2914]: E1212 17:25:33.321987 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.322517 kubelet[2914]: E1212 17:25:33.322480 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.322517 kubelet[2914]: W1212 17:25:33.322501 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.322517 kubelet[2914]: E1212 17:25:33.322514 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:33.322800 kubelet[2914]: E1212 17:25:33.322764 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.322940 kubelet[2914]: W1212 17:25:33.322801 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.322940 kubelet[2914]: E1212 17:25:33.322813 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:25:33.323106 kubelet[2914]: E1212 17:25:33.323086 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:33.323106 kubelet[2914]: W1212 17:25:33.323097 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:33.323371 kubelet[2914]: E1212 17:25:33.323108 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [identical FlexVolume driver-call failure sequence (driver-call.go:262, driver-call.go:149, plugins.go:703) repeated from Dec 12 17:25:33.323 through Dec 12 17:25:33.331; intermediate repeats elided] Dec 12 17:25:33.974735 kubelet[2914]: E1212 17:25:33.974642 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:34.266765 kubelet[2914]: I1212 17:25:34.266660 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:25:34.329690 kubelet[2914]: E1212 17:25:34.329642 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:34.329690 kubelet[2914]: W1212 17:25:34.329670 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:34.329690 kubelet[2914]: E1212 17:25:34.329691 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [identical FlexVolume driver-call failure sequence (driver-call.go:262, driver-call.go:149, plugins.go:703) repeated from Dec 12 17:25:34.329 through Dec 12 17:25:34.336; intermediate repeats elided] Dec 12 17:25:34.336340 kubelet[2914]: E1212 17:25:34.336328 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:25:34.336340 kubelet[2914]: W1212 17:25:34.336338 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:25:34.336403 kubelet[2914]: E1212 17:25:34.336346 2914 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:25:34.551126 containerd[1693]: time="2025-12-12T17:25:34.550987023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:34.552873 containerd[1693]: time="2025-12-12T17:25:34.552786752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:34.554591 containerd[1693]: time="2025-12-12T17:25:34.554543721Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:34.558065 containerd[1693]: time="2025-12-12T17:25:34.558007699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:34.558909 containerd[1693]: time="2025-12-12T17:25:34.558876583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.60019021s" Dec 12 17:25:34.558976 containerd[1693]: time="2025-12-12T17:25:34.558913304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:25:34.565201 containerd[1693]: time="2025-12-12T17:25:34.565165775Z" level=info msg="CreateContainer within sandbox \"b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:25:34.574349 containerd[1693]: time="2025-12-12T17:25:34.574004140Z" level=info msg="Container 0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:34.582402 containerd[1693]: time="2025-12-12T17:25:34.582309302Z" level=info msg="CreateContainer within sandbox \"b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d\"" Dec 12 17:25:34.583227 containerd[1693]: time="2025-12-12T17:25:34.582914466Z" level=info msg="StartContainer for \"0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d\"" Dec 12 17:25:34.585012 containerd[1693]: time="2025-12-12T17:25:34.584975556Z" level=info msg="connecting to shim 0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d" address="unix:///run/containerd/s/f2b8f6728a493d4502cc07e0fa855bd61cb28ca58bf4a6eea45954497ad08b1e" protocol=ttrpc version=3 Dec 12 17:25:34.617162 systemd[1]: Started cri-containerd-0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d.scope - libcontainer container 0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d. 
Dec 12 17:25:34.675000 audit: BPF prog-id=171 op=LOAD Dec 12 17:25:34.675000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3453 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063626365636535363366653038303936323730646366643831383337 Dec 12 17:25:34.675000 audit: BPF prog-id=172 op=LOAD Dec 12 17:25:34.675000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3453 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063626365636535363366653038303936323730646366643831383337 Dec 12 17:25:34.675000 audit: BPF prog-id=172 op=UNLOAD Dec 12 17:25:34.675000 audit[3651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.675000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063626365636535363366653038303936323730646366643831383337 Dec 12 17:25:34.675000 audit: BPF prog-id=171 op=UNLOAD Dec 12 17:25:34.675000 audit[3651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063626365636535363366653038303936323730646366643831383337 Dec 12 17:25:34.675000 audit: BPF prog-id=173 op=LOAD Dec 12 17:25:34.675000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3453 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063626365636535363366653038303936323730646366643831383337 Dec 12 17:25:34.697001 containerd[1693]: time="2025-12-12T17:25:34.696947725Z" level=info msg="StartContainer for \"0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d\" returns successfully" Dec 12 17:25:34.708407 systemd[1]: cri-containerd-0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d.scope: Deactivated successfully. 
Dec 12 17:25:34.713083 containerd[1693]: time="2025-12-12T17:25:34.713036687Z" level=info msg="received container exit event container_id:\"0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d\" id:\"0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d\" pid:3664 exited_at:{seconds:1765560334 nanos:712740325}" Dec 12 17:25:34.714000 audit: BPF prog-id=173 op=UNLOAD Dec 12 17:25:34.733133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cbcece563fe08096270dcfd81837827877796f555b5a6d22bc8d59289521c6d-rootfs.mount: Deactivated successfully. Dec 12 17:25:35.272323 containerd[1693]: time="2025-12-12T17:25:35.271887166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:25:35.286932 kubelet[2914]: I1212 17:25:35.286741 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5dff888f8c-6lw9v" podStartSLOduration=3.582729823 podStartE2EDuration="6.286727202s" podCreationTimestamp="2025-12-12 17:25:29 +0000 UTC" firstStartedPulling="2025-12-12 17:25:30.254551233 +0000 UTC m=+23.363404359" lastFinishedPulling="2025-12-12 17:25:32.958548612 +0000 UTC m=+26.067401738" observedRunningTime="2025-12-12 17:25:33.287130882 +0000 UTC m=+26.395984048" watchObservedRunningTime="2025-12-12 17:25:35.286727202 +0000 UTC m=+28.395580368" Dec 12 17:25:35.975057 kubelet[2914]: E1212 17:25:35.975003 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:37.974150 kubelet[2914]: E1212 17:25:37.974090 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:39.347210 containerd[1693]: time="2025-12-12T17:25:39.347138513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:39.348673 containerd[1693]: time="2025-12-12T17:25:39.348539880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 17:25:39.350708 containerd[1693]: time="2025-12-12T17:25:39.349847686Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:39.353752 containerd[1693]: time="2025-12-12T17:25:39.353717506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:39.354293 containerd[1693]: time="2025-12-12T17:25:39.354268469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 4.082342423s" Dec 12 17:25:39.354374 containerd[1693]: time="2025-12-12T17:25:39.354360229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:25:39.359185 containerd[1693]: time="2025-12-12T17:25:39.359143174Z" level=info msg="CreateContainer within sandbox \"b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" 
Dec 12 17:25:39.370872 containerd[1693]: time="2025-12-12T17:25:39.370359351Z" level=info msg="Container 29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:39.380978 containerd[1693]: time="2025-12-12T17:25:39.380929644Z" level=info msg="CreateContainer within sandbox \"b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5\"" Dec 12 17:25:39.381693 containerd[1693]: time="2025-12-12T17:25:39.381463927Z" level=info msg="StartContainer for \"29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5\"" Dec 12 17:25:39.383198 containerd[1693]: time="2025-12-12T17:25:39.383167416Z" level=info msg="connecting to shim 29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5" address="unix:///run/containerd/s/f2b8f6728a493d4502cc07e0fa855bd61cb28ca58bf4a6eea45954497ad08b1e" protocol=ttrpc version=3 Dec 12 17:25:39.406101 systemd[1]: Started cri-containerd-29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5.scope - libcontainer container 29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5. 
Dec 12 17:25:39.450000 audit: BPF prog-id=174 op=LOAD Dec 12 17:25:39.453217 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 12 17:25:39.453277 kernel: audit: type=1334 audit(1765560339.450:580): prog-id=174 op=LOAD Dec 12 17:25:39.450000 audit[3712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3453 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.456722 kernel: audit: type=1300 audit(1765560339.450:580): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3453 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646266656533383165633931333461656238336439303762303034 Dec 12 17:25:39.450000 audit: BPF prog-id=175 op=LOAD Dec 12 17:25:39.460335 kernel: audit: type=1327 audit(1765560339.450:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646266656533383165633931333461656238336439303762303034 Dec 12 17:25:39.460461 kernel: audit: type=1334 audit(1765560339.450:581): prog-id=175 op=LOAD Dec 12 17:25:39.460485 kernel: audit: type=1300 audit(1765560339.450:581): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3453 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.450000 audit[3712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3453 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646266656533383165633931333461656238336439303762303034 Dec 12 17:25:39.466686 kernel: audit: type=1327 audit(1765560339.450:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646266656533383165633931333461656238336439303762303034 Dec 12 17:25:39.450000 audit: BPF prog-id=175 op=UNLOAD Dec 12 17:25:39.450000 audit[3712]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.470403 kernel: audit: type=1334 audit(1765560339.450:582): prog-id=175 op=UNLOAD Dec 12 17:25:39.470468 kernel: audit: type=1300 audit(1765560339.450:582): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.450000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646266656533383165633931333461656238336439303762303034 Dec 12 17:25:39.473475 kernel: audit: type=1327 audit(1765560339.450:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646266656533383165633931333461656238336439303762303034 Dec 12 17:25:39.473691 kernel: audit: type=1334 audit(1765560339.450:583): prog-id=174 op=UNLOAD Dec 12 17:25:39.450000 audit: BPF prog-id=174 op=UNLOAD Dec 12 17:25:39.450000 audit[3712]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646266656533383165633931333461656238336439303762303034 Dec 12 17:25:39.450000 audit: BPF prog-id=176 op=LOAD Dec 12 17:25:39.450000 audit[3712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3453 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.450000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646266656533383165633931333461656238336439303762303034 Dec 12 17:25:39.492253 containerd[1693]: time="2025-12-12T17:25:39.492193210Z" level=info msg="StartContainer for \"29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5\" returns successfully" Dec 12 17:25:39.974045 kubelet[2914]: E1212 17:25:39.973979 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:40.756803 systemd[1]: cri-containerd-29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5.scope: Deactivated successfully. Dec 12 17:25:40.757269 systemd[1]: cri-containerd-29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5.scope: Consumed 458ms CPU time, 187.7M memory peak, 165.9M written to disk. Dec 12 17:25:40.759696 containerd[1693]: time="2025-12-12T17:25:40.759644930Z" level=info msg="received container exit event container_id:\"29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5\" id:\"29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5\" pid:3725 exited_at:{seconds:1765560340 nanos:759332288}" Dec 12 17:25:40.762000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:25:40.776654 kubelet[2914]: I1212 17:25:40.776379 2914 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:25:40.786010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29dbfee381ec9134aeb83d907b00466deadfac2b539d2269a6d3bb9c5c9db8b5-rootfs.mount: Deactivated successfully. 
Dec 12 17:25:42.217250 systemd[1]: Created slice kubepods-burstable-pod936cef33_28fa_419e_841a_7a41dd1f707c.slice - libcontainer container kubepods-burstable-pod936cef33_28fa_419e_841a_7a41dd1f707c.slice. Dec 12 17:25:42.227358 systemd[1]: Created slice kubepods-besteffort-pod38591688_bf7e_4006_99b5_49217c275f18.slice - libcontainer container kubepods-besteffort-pod38591688_bf7e_4006_99b5_49217c275f18.slice. Dec 12 17:25:42.233384 containerd[1693]: time="2025-12-12T17:25:42.233211417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nmqkr,Uid:38591688-bf7e-4006-99b5-49217c275f18,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:42.234975 systemd[1]: Created slice kubepods-burstable-pod5a0340c8_0c30_428c_a0be_59a60fca6418.slice - libcontainer container kubepods-burstable-pod5a0340c8_0c30_428c_a0be_59a60fca6418.slice. Dec 12 17:25:42.250256 systemd[1]: Created slice kubepods-besteffort-pod482c1729_f8e9_4de5_b13e_dc7cf991c00d.slice - libcontainer container kubepods-besteffort-pod482c1729_f8e9_4de5_b13e_dc7cf991c00d.slice. Dec 12 17:25:42.265980 systemd[1]: Created slice kubepods-besteffort-pod4f0bba68_28b8_4297_9bde_8061a24d85f6.slice - libcontainer container kubepods-besteffort-pod4f0bba68_28b8_4297_9bde_8061a24d85f6.slice. Dec 12 17:25:42.272347 systemd[1]: Created slice kubepods-besteffort-pod96a8850e_ec22_48ad_ae30_8b21eeca5a2c.slice - libcontainer container kubepods-besteffort-pod96a8850e_ec22_48ad_ae30_8b21eeca5a2c.slice. Dec 12 17:25:42.280268 systemd[1]: Created slice kubepods-besteffort-pod9c61390d_c42e_46b3_8a0d_2bc07904de15.slice - libcontainer container kubepods-besteffort-pod9c61390d_c42e_46b3_8a0d_2bc07904de15.slice. Dec 12 17:25:42.284709 systemd[1]: Created slice kubepods-besteffort-pod20a0a748_4991_4e73_9050_b5da442006d5.slice - libcontainer container kubepods-besteffort-pod20a0a748_4991_4e73_9050_b5da442006d5.slice. 
Dec 12 17:25:42.296591 containerd[1693]: time="2025-12-12T17:25:42.296548459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:25:42.342014 containerd[1693]: time="2025-12-12T17:25:42.341950809Z" level=error msg="Failed to destroy network for sandbox \"b2e2ceab24780ed76df18b5bce407591c80f0c55aaf4ca3cc3f824cbf74d1aa7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.343538 systemd[1]: run-netns-cni\x2defe8048a\x2da0fe\x2d3bcc\x2d5dca\x2dda08963429db.mount: Deactivated successfully. Dec 12 17:25:42.348150 containerd[1693]: time="2025-12-12T17:25:42.348045280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nmqkr,Uid:38591688-bf7e-4006-99b5-49217c275f18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e2ceab24780ed76df18b5bce407591c80f0c55aaf4ca3cc3f824cbf74d1aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.348543 kubelet[2914]: E1212 17:25:42.348497 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e2ceab24780ed76df18b5bce407591c80f0c55aaf4ca3cc3f824cbf74d1aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.349166 kubelet[2914]: E1212 17:25:42.348575 2914 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e2ceab24780ed76df18b5bce407591c80f0c55aaf4ca3cc3f824cbf74d1aa7\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nmqkr" Dec 12 17:25:42.349166 kubelet[2914]: E1212 17:25:42.348597 2914 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e2ceab24780ed76df18b5bce407591c80f0c55aaf4ca3cc3f824cbf74d1aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nmqkr" Dec 12 17:25:42.349166 kubelet[2914]: E1212 17:25:42.348646 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2e2ceab24780ed76df18b5bce407591c80f0c55aaf4ca3cc3f824cbf74d1aa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:42.388321 kubelet[2914]: I1212 17:25:42.388271 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c61390d-c42e-46b3-8a0d-2bc07904de15-config\") pod \"goldmane-666569f655-pjdtp\" (UID: \"9c61390d-c42e-46b3-8a0d-2bc07904de15\") " pod="calico-system/goldmane-666569f655-pjdtp" Dec 12 17:25:42.388621 kubelet[2914]: I1212 17:25:42.388487 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20a0a748-4991-4e73-9050-b5da442006d5-whisker-backend-key-pair\") pod \"whisker-676b88df98-smd46\" (UID: \"20a0a748-4991-4e73-9050-b5da442006d5\") " pod="calico-system/whisker-676b88df98-smd46" Dec 12 17:25:42.388621 kubelet[2914]: I1212 17:25:42.388515 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9q9\" (UniqueName: \"kubernetes.io/projected/20a0a748-4991-4e73-9050-b5da442006d5-kube-api-access-ch9q9\") pod \"whisker-676b88df98-smd46\" (UID: \"20a0a748-4991-4e73-9050-b5da442006d5\") " pod="calico-system/whisker-676b88df98-smd46" Dec 12 17:25:42.388621 kubelet[2914]: I1212 17:25:42.388552 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c61390d-c42e-46b3-8a0d-2bc07904de15-goldmane-ca-bundle\") pod \"goldmane-666569f655-pjdtp\" (UID: \"9c61390d-c42e-46b3-8a0d-2bc07904de15\") " pod="calico-system/goldmane-666569f655-pjdtp" Dec 12 17:25:42.388621 kubelet[2914]: I1212 17:25:42.388571 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a0340c8-0c30-428c-a0be-59a60fca6418-config-volume\") pod \"coredns-674b8bbfcf-txwdj\" (UID: \"5a0340c8-0c30-428c-a0be-59a60fca6418\") " pod="kube-system/coredns-674b8bbfcf-txwdj" Dec 12 17:25:42.388621 kubelet[2914]: I1212 17:25:42.388588 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96a8850e-ec22-48ad-ae30-8b21eeca5a2c-calico-apiserver-certs\") pod \"calico-apiserver-5b7dfdfddc-679rm\" (UID: \"96a8850e-ec22-48ad-ae30-8b21eeca5a2c\") " pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" Dec 12 17:25:42.388763 kubelet[2914]: I1212 17:25:42.388634 2914 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htxpt\" (UniqueName: \"kubernetes.io/projected/96a8850e-ec22-48ad-ae30-8b21eeca5a2c-kube-api-access-htxpt\") pod \"calico-apiserver-5b7dfdfddc-679rm\" (UID: \"96a8850e-ec22-48ad-ae30-8b21eeca5a2c\") " pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" Dec 12 17:25:42.388763 kubelet[2914]: I1212 17:25:42.388712 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/482c1729-f8e9-4de5-b13e-dc7cf991c00d-calico-apiserver-certs\") pod \"calico-apiserver-5b7dfdfddc-r69nw\" (UID: \"482c1729-f8e9-4de5-b13e-dc7cf991c00d\") " pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" Dec 12 17:25:42.388763 kubelet[2914]: I1212 17:25:42.388748 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bl7\" (UniqueName: \"kubernetes.io/projected/4f0bba68-28b8-4297-9bde-8061a24d85f6-kube-api-access-49bl7\") pod \"calico-kube-controllers-6bb58f788f-rt28s\" (UID: \"4f0bba68-28b8-4297-9bde-8061a24d85f6\") " pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" Dec 12 17:25:42.389173 kubelet[2914]: I1212 17:25:42.388812 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdf7\" (UniqueName: \"kubernetes.io/projected/936cef33-28fa-419e-841a-7a41dd1f707c-kube-api-access-vxdf7\") pod \"coredns-674b8bbfcf-4xpx4\" (UID: \"936cef33-28fa-419e-841a-7a41dd1f707c\") " pod="kube-system/coredns-674b8bbfcf-4xpx4" Dec 12 17:25:42.389173 kubelet[2914]: I1212 17:25:42.388921 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsx5q\" (UniqueName: \"kubernetes.io/projected/9c61390d-c42e-46b3-8a0d-2bc07904de15-kube-api-access-rsx5q\") pod \"goldmane-666569f655-pjdtp\" (UID: 
\"9c61390d-c42e-46b3-8a0d-2bc07904de15\") " pod="calico-system/goldmane-666569f655-pjdtp" Dec 12 17:25:42.389173 kubelet[2914]: I1212 17:25:42.388945 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/936cef33-28fa-419e-841a-7a41dd1f707c-config-volume\") pod \"coredns-674b8bbfcf-4xpx4\" (UID: \"936cef33-28fa-419e-841a-7a41dd1f707c\") " pod="kube-system/coredns-674b8bbfcf-4xpx4" Dec 12 17:25:42.389173 kubelet[2914]: I1212 17:25:42.388964 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gntvq\" (UniqueName: \"kubernetes.io/projected/482c1729-f8e9-4de5-b13e-dc7cf991c00d-kube-api-access-gntvq\") pod \"calico-apiserver-5b7dfdfddc-r69nw\" (UID: \"482c1729-f8e9-4de5-b13e-dc7cf991c00d\") " pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" Dec 12 17:25:42.389173 kubelet[2914]: I1212 17:25:42.388981 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20a0a748-4991-4e73-9050-b5da442006d5-whisker-ca-bundle\") pod \"whisker-676b88df98-smd46\" (UID: \"20a0a748-4991-4e73-9050-b5da442006d5\") " pod="calico-system/whisker-676b88df98-smd46" Dec 12 17:25:42.389307 kubelet[2914]: I1212 17:25:42.388996 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9c61390d-c42e-46b3-8a0d-2bc07904de15-goldmane-key-pair\") pod \"goldmane-666569f655-pjdtp\" (UID: \"9c61390d-c42e-46b3-8a0d-2bc07904de15\") " pod="calico-system/goldmane-666569f655-pjdtp" Dec 12 17:25:42.389307 kubelet[2914]: I1212 17:25:42.389012 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwz8t\" (UniqueName: 
\"kubernetes.io/projected/5a0340c8-0c30-428c-a0be-59a60fca6418-kube-api-access-bwz8t\") pod \"coredns-674b8bbfcf-txwdj\" (UID: \"5a0340c8-0c30-428c-a0be-59a60fca6418\") " pod="kube-system/coredns-674b8bbfcf-txwdj" Dec 12 17:25:42.389307 kubelet[2914]: I1212 17:25:42.389045 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f0bba68-28b8-4297-9bde-8061a24d85f6-tigera-ca-bundle\") pod \"calico-kube-controllers-6bb58f788f-rt28s\" (UID: \"4f0bba68-28b8-4297-9bde-8061a24d85f6\") " pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" Dec 12 17:25:42.521785 containerd[1693]: time="2025-12-12T17:25:42.521612882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4xpx4,Uid:936cef33-28fa-419e-841a-7a41dd1f707c,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:42.552914 containerd[1693]: time="2025-12-12T17:25:42.551308833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txwdj,Uid:5a0340c8-0c30-428c-a0be-59a60fca6418,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:42.558964 containerd[1693]: time="2025-12-12T17:25:42.558925312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7dfdfddc-r69nw,Uid:482c1729-f8e9-4de5-b13e-dc7cf991c00d,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:25:42.568635 containerd[1693]: time="2025-12-12T17:25:42.568567921Z" level=error msg="Failed to destroy network for sandbox \"8094ecc1d7e02cea375b78a544528c7e886d2d884d7b9610169a6757d091580e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.571059 containerd[1693]: time="2025-12-12T17:25:42.571021653Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6bb58f788f-rt28s,Uid:4f0bba68-28b8-4297-9bde-8061a24d85f6,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:42.574792 containerd[1693]: time="2025-12-12T17:25:42.574765032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7dfdfddc-679rm,Uid:96a8850e-ec22-48ad-ae30-8b21eeca5a2c,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:25:42.575214 containerd[1693]: time="2025-12-12T17:25:42.575180674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4xpx4,Uid:936cef33-28fa-419e-841a-7a41dd1f707c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8094ecc1d7e02cea375b78a544528c7e886d2d884d7b9610169a6757d091580e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.575543 kubelet[2914]: E1212 17:25:42.575509 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8094ecc1d7e02cea375b78a544528c7e886d2d884d7b9610169a6757d091580e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.575616 kubelet[2914]: E1212 17:25:42.575566 2914 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8094ecc1d7e02cea375b78a544528c7e886d2d884d7b9610169a6757d091580e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4xpx4" Dec 12 17:25:42.575616 kubelet[2914]: E1212 17:25:42.575588 2914 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8094ecc1d7e02cea375b78a544528c7e886d2d884d7b9610169a6757d091580e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4xpx4" Dec 12 17:25:42.575671 kubelet[2914]: E1212 17:25:42.575627 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4xpx4_kube-system(936cef33-28fa-419e-841a-7a41dd1f707c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4xpx4_kube-system(936cef33-28fa-419e-841a-7a41dd1f707c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8094ecc1d7e02cea375b78a544528c7e886d2d884d7b9610169a6757d091580e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4xpx4" podUID="936cef33-28fa-419e-841a-7a41dd1f707c" Dec 12 17:25:42.584474 containerd[1693]: time="2025-12-12T17:25:42.584371201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pjdtp,Uid:9c61390d-c42e-46b3-8a0d-2bc07904de15,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:42.589935 containerd[1693]: time="2025-12-12T17:25:42.589887789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-676b88df98-smd46,Uid:20a0a748-4991-4e73-9050-b5da442006d5,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:42.621139 containerd[1693]: time="2025-12-12T17:25:42.621028987Z" level=error msg="Failed to destroy network for sandbox \"19d9452143b677b564d17a1122f88966d99207f333e1e36c5a5b0793ef511c9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.623304 containerd[1693]: time="2025-12-12T17:25:42.623222038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txwdj,Uid:5a0340c8-0c30-428c-a0be-59a60fca6418,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d9452143b677b564d17a1122f88966d99207f333e1e36c5a5b0793ef511c9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.623984 kubelet[2914]: E1212 17:25:42.623443 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d9452143b677b564d17a1122f88966d99207f333e1e36c5a5b0793ef511c9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.623984 kubelet[2914]: E1212 17:25:42.623500 2914 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d9452143b677b564d17a1122f88966d99207f333e1e36c5a5b0793ef511c9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-txwdj" Dec 12 17:25:42.623984 kubelet[2914]: E1212 17:25:42.623520 2914 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d9452143b677b564d17a1122f88966d99207f333e1e36c5a5b0793ef511c9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-txwdj" Dec 12 17:25:42.624086 kubelet[2914]: E1212 17:25:42.623563 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-txwdj_kube-system(5a0340c8-0c30-428c-a0be-59a60fca6418)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-txwdj_kube-system(5a0340c8-0c30-428c-a0be-59a60fca6418)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19d9452143b677b564d17a1122f88966d99207f333e1e36c5a5b0793ef511c9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-txwdj" podUID="5a0340c8-0c30-428c-a0be-59a60fca6418" Dec 12 17:25:42.654618 containerd[1693]: time="2025-12-12T17:25:42.654541078Z" level=error msg="Failed to destroy network for sandbox \"fc9bc70a5ea06f5025ecbf5bfc6f0d4a521ed1929e1ea68fdf4628b45149ecf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.657168 containerd[1693]: time="2025-12-12T17:25:42.657057770Z" level=error msg="Failed to destroy network for sandbox \"099f32b6d060f6d094d5a76554689c4c62cf6cd039d1ed4467f3b6f956155f15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.657369 containerd[1693]: time="2025-12-12T17:25:42.657332132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7dfdfddc-679rm,Uid:96a8850e-ec22-48ad-ae30-8b21eeca5a2c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc9bc70a5ea06f5025ecbf5bfc6f0d4a521ed1929e1ea68fdf4628b45149ecf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.658045 kubelet[2914]: E1212 17:25:42.657640 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9bc70a5ea06f5025ecbf5bfc6f0d4a521ed1929e1ea68fdf4628b45149ecf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.658045 kubelet[2914]: E1212 17:25:42.657707 2914 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9bc70a5ea06f5025ecbf5bfc6f0d4a521ed1929e1ea68fdf4628b45149ecf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" Dec 12 17:25:42.658045 kubelet[2914]: E1212 17:25:42.657727 2914 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9bc70a5ea06f5025ecbf5bfc6f0d4a521ed1929e1ea68fdf4628b45149ecf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" Dec 12 17:25:42.658191 kubelet[2914]: E1212 17:25:42.657779 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b7dfdfddc-679rm_calico-apiserver(96a8850e-ec22-48ad-ae30-8b21eeca5a2c)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-5b7dfdfddc-679rm_calico-apiserver(96a8850e-ec22-48ad-ae30-8b21eeca5a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc9bc70a5ea06f5025ecbf5bfc6f0d4a521ed1929e1ea68fdf4628b45149ecf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:25:42.659453 containerd[1693]: time="2025-12-12T17:25:42.659406382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7dfdfddc-r69nw,Uid:482c1729-f8e9-4de5-b13e-dc7cf991c00d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"099f32b6d060f6d094d5a76554689c4c62cf6cd039d1ed4467f3b6f956155f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.659640 kubelet[2914]: E1212 17:25:42.659602 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099f32b6d060f6d094d5a76554689c4c62cf6cd039d1ed4467f3b6f956155f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.659706 kubelet[2914]: E1212 17:25:42.659655 2914 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099f32b6d060f6d094d5a76554689c4c62cf6cd039d1ed4467f3b6f956155f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" Dec 12 17:25:42.659706 kubelet[2914]: E1212 17:25:42.659675 2914 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099f32b6d060f6d094d5a76554689c4c62cf6cd039d1ed4467f3b6f956155f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" Dec 12 17:25:42.659766 kubelet[2914]: E1212 17:25:42.659710 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b7dfdfddc-r69nw_calico-apiserver(482c1729-f8e9-4de5-b13e-dc7cf991c00d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b7dfdfddc-r69nw_calico-apiserver(482c1729-f8e9-4de5-b13e-dc7cf991c00d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"099f32b6d060f6d094d5a76554689c4c62cf6cd039d1ed4467f3b6f956155f15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:25:42.669417 containerd[1693]: time="2025-12-12T17:25:42.669368873Z" level=error msg="Failed to destroy network for sandbox \"306d5ea747391766726ba54cbfd016c0dc49d46b7dbfb04033a468741c3d8517\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.672811 containerd[1693]: time="2025-12-12T17:25:42.672762730Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6bb58f788f-rt28s,Uid:4f0bba68-28b8-4297-9bde-8061a24d85f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"306d5ea747391766726ba54cbfd016c0dc49d46b7dbfb04033a468741c3d8517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.673250 kubelet[2914]: E1212 17:25:42.673215 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"306d5ea747391766726ba54cbfd016c0dc49d46b7dbfb04033a468741c3d8517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.673387 kubelet[2914]: E1212 17:25:42.673365 2914 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"306d5ea747391766726ba54cbfd016c0dc49d46b7dbfb04033a468741c3d8517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" Dec 12 17:25:42.673494 kubelet[2914]: E1212 17:25:42.673475 2914 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"306d5ea747391766726ba54cbfd016c0dc49d46b7dbfb04033a468741c3d8517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" Dec 12 17:25:42.673640 kubelet[2914]: E1212 
17:25:42.673594 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bb58f788f-rt28s_calico-system(4f0bba68-28b8-4297-9bde-8061a24d85f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bb58f788f-rt28s_calico-system(4f0bba68-28b8-4297-9bde-8061a24d85f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"306d5ea747391766726ba54cbfd016c0dc49d46b7dbfb04033a468741c3d8517\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:25:42.679056 containerd[1693]: time="2025-12-12T17:25:42.679018722Z" level=error msg="Failed to destroy network for sandbox \"e193fa626eb6ec2eae4009fb06dde911afed0522ef17494f6cefe5ab8ddc1b5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.681062 containerd[1693]: time="2025-12-12T17:25:42.681024332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-676b88df98-smd46,Uid:20a0a748-4991-4e73-9050-b5da442006d5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e193fa626eb6ec2eae4009fb06dde911afed0522ef17494f6cefe5ab8ddc1b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.681258 kubelet[2914]: E1212 17:25:42.681226 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e193fa626eb6ec2eae4009fb06dde911afed0522ef17494f6cefe5ab8ddc1b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.681326 kubelet[2914]: E1212 17:25:42.681281 2914 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e193fa626eb6ec2eae4009fb06dde911afed0522ef17494f6cefe5ab8ddc1b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-676b88df98-smd46" Dec 12 17:25:42.681326 kubelet[2914]: E1212 17:25:42.681301 2914 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e193fa626eb6ec2eae4009fb06dde911afed0522ef17494f6cefe5ab8ddc1b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-676b88df98-smd46" Dec 12 17:25:42.681375 kubelet[2914]: E1212 17:25:42.681349 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-676b88df98-smd46_calico-system(20a0a748-4991-4e73-9050-b5da442006d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-676b88df98-smd46_calico-system(20a0a748-4991-4e73-9050-b5da442006d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e193fa626eb6ec2eae4009fb06dde911afed0522ef17494f6cefe5ab8ddc1b5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-676b88df98-smd46" 
podUID="20a0a748-4991-4e73-9050-b5da442006d5" Dec 12 17:25:42.686630 containerd[1693]: time="2025-12-12T17:25:42.686586680Z" level=error msg="Failed to destroy network for sandbox \"229955ebddf684eb9b40621381ae79620052d613308cd908d371001d1e0a7ca6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.690901 containerd[1693]: time="2025-12-12T17:25:42.690825462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pjdtp,Uid:9c61390d-c42e-46b3-8a0d-2bc07904de15,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"229955ebddf684eb9b40621381ae79620052d613308cd908d371001d1e0a7ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.691102 kubelet[2914]: E1212 17:25:42.691061 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229955ebddf684eb9b40621381ae79620052d613308cd908d371001d1e0a7ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:25:42.691167 kubelet[2914]: E1212 17:25:42.691123 2914 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229955ebddf684eb9b40621381ae79620052d613308cd908d371001d1e0a7ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pjdtp" Dec 12 17:25:42.691167 kubelet[2914]: E1212 
17:25:42.691145 2914 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229955ebddf684eb9b40621381ae79620052d613308cd908d371001d1e0a7ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pjdtp" Dec 12 17:25:42.691234 kubelet[2914]: E1212 17:25:42.691196 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pjdtp_calico-system(9c61390d-c42e-46b3-8a0d-2bc07904de15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pjdtp_calico-system(9c61390d-c42e-46b3-8a0d-2bc07904de15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"229955ebddf684eb9b40621381ae79620052d613308cd908d371001d1e0a7ca6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:25:49.532814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount960580785.mount: Deactivated successfully. 
Dec 12 17:25:49.550986 containerd[1693]: time="2025-12-12T17:25:49.550936758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:49.551846 containerd[1693]: time="2025-12-12T17:25:49.551771562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:25:49.552803 containerd[1693]: time="2025-12-12T17:25:49.552770127Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:49.554865 containerd[1693]: time="2025-12-12T17:25:49.554812338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:49.555794 containerd[1693]: time="2025-12-12T17:25:49.555403861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.258815042s" Dec 12 17:25:49.555794 containerd[1693]: time="2025-12-12T17:25:49.555429901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:25:49.572136 containerd[1693]: time="2025-12-12T17:25:49.572085146Z" level=info msg="CreateContainer within sandbox \"b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:25:49.586851 containerd[1693]: time="2025-12-12T17:25:49.585876496Z" level=info msg="Container 
0b0ed877b34133d7d0b3f9340281428c849cc2f8dcdcc20630d75012c9e09134: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:49.598609 containerd[1693]: time="2025-12-12T17:25:49.598565040Z" level=info msg="CreateContainer within sandbox \"b713c99ed2729d937485a69aa121a864dd90334abb3fcafeca91ffddb35b7768\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0b0ed877b34133d7d0b3f9340281428c849cc2f8dcdcc20630d75012c9e09134\"" Dec 12 17:25:49.599746 containerd[1693]: time="2025-12-12T17:25:49.599706806Z" level=info msg="StartContainer for \"0b0ed877b34133d7d0b3f9340281428c849cc2f8dcdcc20630d75012c9e09134\"" Dec 12 17:25:49.601761 containerd[1693]: time="2025-12-12T17:25:49.601679016Z" level=info msg="connecting to shim 0b0ed877b34133d7d0b3f9340281428c849cc2f8dcdcc20630d75012c9e09134" address="unix:///run/containerd/s/f2b8f6728a493d4502cc07e0fa855bd61cb28ca58bf4a6eea45954497ad08b1e" protocol=ttrpc version=3 Dec 12 17:25:49.624194 systemd[1]: Started cri-containerd-0b0ed877b34133d7d0b3f9340281428c849cc2f8dcdcc20630d75012c9e09134.scope - libcontainer container 0b0ed877b34133d7d0b3f9340281428c849cc2f8dcdcc20630d75012c9e09134. 
Dec 12 17:25:49.695000 audit: BPF prog-id=177 op=LOAD Dec 12 17:25:49.697414 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 17:25:49.697471 kernel: audit: type=1334 audit(1765560349.695:586): prog-id=177 op=LOAD Dec 12 17:25:49.695000 audit[4033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3453 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:49.701061 kernel: audit: type=1300 audit(1765560349.695:586): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3453 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:49.701106 kernel: audit: type=1327 audit(1765560349.695:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306564383737623334313333643764306233663933343032383134 Dec 12 17:25:49.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306564383737623334313333643764306233663933343032383134 Dec 12 17:25:49.695000 audit: BPF prog-id=178 op=LOAD Dec 12 17:25:49.704989 kernel: audit: type=1334 audit(1765560349.695:587): prog-id=178 op=LOAD Dec 12 17:25:49.695000 audit[4033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3453 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:49.708040 kernel: audit: type=1300 audit(1765560349.695:587): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3453 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:49.708195 kernel: audit: type=1327 audit(1765560349.695:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306564383737623334313333643764306233663933343032383134 Dec 12 17:25:49.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306564383737623334313333643764306233663933343032383134 Dec 12 17:25:49.695000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:25:49.695000 audit[4033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:49.714564 kernel: audit: type=1334 audit(1765560349.695:588): prog-id=178 op=UNLOAD Dec 12 17:25:49.714628 kernel: audit: type=1300 audit(1765560349.695:588): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:49.714648 kernel: audit: type=1327 audit(1765560349.695:588): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306564383737623334313333643764306233663933343032383134 Dec 12 17:25:49.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306564383737623334313333643764306233663933343032383134 Dec 12 17:25:49.695000 audit: BPF prog-id=177 op=UNLOAD Dec 12 17:25:49.718224 kernel: audit: type=1334 audit(1765560349.695:589): prog-id=177 op=UNLOAD Dec 12 17:25:49.695000 audit[4033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3453 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:49.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306564383737623334313333643764306233663933343032383134 Dec 12 17:25:49.695000 audit: BPF prog-id=179 op=LOAD Dec 12 17:25:49.695000 audit[4033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3453 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:49.695000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306564383737623334313333643764306233663933343032383134 Dec 12 17:25:49.729651 containerd[1693]: time="2025-12-12T17:25:49.729617706Z" level=info msg="StartContainer for \"0b0ed877b34133d7d0b3f9340281428c849cc2f8dcdcc20630d75012c9e09134\" returns successfully" Dec 12 17:25:49.873075 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:25:49.873243 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 17:25:50.045144 kubelet[2914]: I1212 17:25:50.045075 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20a0a748-4991-4e73-9050-b5da442006d5-whisker-backend-key-pair\") pod \"20a0a748-4991-4e73-9050-b5da442006d5\" (UID: \"20a0a748-4991-4e73-9050-b5da442006d5\") " Dec 12 17:25:50.045144 kubelet[2914]: I1212 17:25:50.045130 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20a0a748-4991-4e73-9050-b5da442006d5-whisker-ca-bundle\") pod \"20a0a748-4991-4e73-9050-b5da442006d5\" (UID: \"20a0a748-4991-4e73-9050-b5da442006d5\") " Dec 12 17:25:50.045144 kubelet[2914]: I1212 17:25:50.045157 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch9q9\" (UniqueName: \"kubernetes.io/projected/20a0a748-4991-4e73-9050-b5da442006d5-kube-api-access-ch9q9\") pod \"20a0a748-4991-4e73-9050-b5da442006d5\" (UID: \"20a0a748-4991-4e73-9050-b5da442006d5\") " Dec 12 17:25:50.046286 kubelet[2914]: I1212 17:25:50.045552 2914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20a0a748-4991-4e73-9050-b5da442006d5-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "20a0a748-4991-4e73-9050-b5da442006d5" (UID: "20a0a748-4991-4e73-9050-b5da442006d5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:25:50.048313 kubelet[2914]: I1212 17:25:50.048275 2914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a0a748-4991-4e73-9050-b5da442006d5-kube-api-access-ch9q9" (OuterVolumeSpecName: "kube-api-access-ch9q9") pod "20a0a748-4991-4e73-9050-b5da442006d5" (UID: "20a0a748-4991-4e73-9050-b5da442006d5"). InnerVolumeSpecName "kube-api-access-ch9q9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:25:50.048709 kubelet[2914]: I1212 17:25:50.048678 2914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a0a748-4991-4e73-9050-b5da442006d5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "20a0a748-4991-4e73-9050-b5da442006d5" (UID: "20a0a748-4991-4e73-9050-b5da442006d5"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:25:50.146107 kubelet[2914]: I1212 17:25:50.145957 2914 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20a0a748-4991-4e73-9050-b5da442006d5-whisker-backend-key-pair\") on node \"ci-4515-1-0-4-1de611bfb5\" DevicePath \"\"" Dec 12 17:25:50.146107 kubelet[2914]: I1212 17:25:50.145995 2914 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20a0a748-4991-4e73-9050-b5da442006d5-whisker-ca-bundle\") on node \"ci-4515-1-0-4-1de611bfb5\" DevicePath \"\"" Dec 12 17:25:50.146107 kubelet[2914]: I1212 17:25:50.146005 2914 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ch9q9\" (UniqueName: \"kubernetes.io/projected/20a0a748-4991-4e73-9050-b5da442006d5-kube-api-access-ch9q9\") on node \"ci-4515-1-0-4-1de611bfb5\" DevicePath \"\"" Dec 12 17:25:50.319879 systemd[1]: Removed slice kubepods-besteffort-pod20a0a748_4991_4e73_9050_b5da442006d5.slice - libcontainer container kubepods-besteffort-pod20a0a748_4991_4e73_9050_b5da442006d5.slice. Dec 12 17:25:50.333781 kubelet[2914]: I1212 17:25:50.333709 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-brsc9" podStartSLOduration=2.100014809 podStartE2EDuration="21.333693335s" podCreationTimestamp="2025-12-12 17:25:29 +0000 UTC" firstStartedPulling="2025-12-12 17:25:30.322393658 +0000 UTC m=+23.431246824" lastFinishedPulling="2025-12-12 17:25:49.556072184 +0000 UTC m=+42.664925350" observedRunningTime="2025-12-12 17:25:50.333195613 +0000 UTC m=+43.442048779" watchObservedRunningTime="2025-12-12 17:25:50.333693335 +0000 UTC m=+43.442546501" Dec 12 17:25:50.399385 systemd[1]: Created slice kubepods-besteffort-pod1b458fc3_1075_4699_aeb2_3b97525fad3c.slice - libcontainer container kubepods-besteffort-pod1b458fc3_1075_4699_aeb2_3b97525fad3c.slice. 
Dec 12 17:25:50.448694 kubelet[2914]: I1212 17:25:50.448599 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1b458fc3-1075-4699-aeb2-3b97525fad3c-whisker-backend-key-pair\") pod \"whisker-6568b8ffc6-2jn5v\" (UID: \"1b458fc3-1075-4699-aeb2-3b97525fad3c\") " pod="calico-system/whisker-6568b8ffc6-2jn5v" Dec 12 17:25:50.448694 kubelet[2914]: I1212 17:25:50.448654 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b458fc3-1075-4699-aeb2-3b97525fad3c-whisker-ca-bundle\") pod \"whisker-6568b8ffc6-2jn5v\" (UID: \"1b458fc3-1075-4699-aeb2-3b97525fad3c\") " pod="calico-system/whisker-6568b8ffc6-2jn5v" Dec 12 17:25:50.448694 kubelet[2914]: I1212 17:25:50.448679 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8mq\" (UniqueName: \"kubernetes.io/projected/1b458fc3-1075-4699-aeb2-3b97525fad3c-kube-api-access-5r8mq\") pod \"whisker-6568b8ffc6-2jn5v\" (UID: \"1b458fc3-1075-4699-aeb2-3b97525fad3c\") " pod="calico-system/whisker-6568b8ffc6-2jn5v" Dec 12 17:25:50.533727 systemd[1]: var-lib-kubelet-pods-20a0a748\x2d4991\x2d4e73\x2d9050\x2db5da442006d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dch9q9.mount: Deactivated successfully. Dec 12 17:25:50.533827 systemd[1]: var-lib-kubelet-pods-20a0a748\x2d4991\x2d4e73\x2d9050\x2db5da442006d5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 12 17:25:50.702798 containerd[1693]: time="2025-12-12T17:25:50.702681090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6568b8ffc6-2jn5v,Uid:1b458fc3-1075-4699-aeb2-3b97525fad3c,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:50.939481 systemd-networkd[1605]: calia5702298808: Link UP Dec 12 17:25:50.939698 systemd-networkd[1605]: calia5702298808: Gained carrier Dec 12 17:25:50.952954 containerd[1693]: 2025-12-12 17:25:50.728 [INFO][4098] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:25:50.952954 containerd[1693]: 2025-12-12 17:25:50.747 [INFO][4098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0 whisker-6568b8ffc6- calico-system 1b458fc3-1075-4699-aeb2-3b97525fad3c 934 0 2025-12-12 17:25:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6568b8ffc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515-1-0-4-1de611bfb5 whisker-6568b8ffc6-2jn5v eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia5702298808 [] [] }} ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Namespace="calico-system" Pod="whisker-6568b8ffc6-2jn5v" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-" Dec 12 17:25:50.952954 containerd[1693]: 2025-12-12 17:25:50.747 [INFO][4098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Namespace="calico-system" Pod="whisker-6568b8ffc6-2jn5v" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" Dec 12 17:25:50.952954 containerd[1693]: 2025-12-12 17:25:50.793 [INFO][4111] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" HandleID="k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Workload="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.793 [INFO][4111] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" HandleID="k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Workload="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000429580), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-4-1de611bfb5", "pod":"whisker-6568b8ffc6-2jn5v", "timestamp":"2025-12-12 17:25:50.793064949 +0000 UTC"}, Hostname:"ci-4515-1-0-4-1de611bfb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.793 [INFO][4111] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.793 [INFO][4111] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.793 [INFO][4111] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-4-1de611bfb5' Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.804 [INFO][4111] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.810 [INFO][4111] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.907 [INFO][4111] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.910 [INFO][4111] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953166 containerd[1693]: 2025-12-12 17:25:50.912 [INFO][4111] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953344 containerd[1693]: 2025-12-12 17:25:50.913 [INFO][4111] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953344 containerd[1693]: 2025-12-12 17:25:50.915 [INFO][4111] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd Dec 12 17:25:50.953344 containerd[1693]: 2025-12-12 17:25:50.920 [INFO][4111] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953344 containerd[1693]: 2025-12-12 17:25:50.930 [INFO][4111] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.115.129/26] block=192.168.115.128/26 handle="k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953344 containerd[1693]: 2025-12-12 17:25:50.931 [INFO][4111] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.129/26] handle="k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:50.953344 containerd[1693]: 2025-12-12 17:25:50.931 [INFO][4111] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:25:50.953344 containerd[1693]: 2025-12-12 17:25:50.931 [INFO][4111] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.129/26] IPv6=[] ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" HandleID="k8s-pod-network.3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Workload="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" Dec 12 17:25:50.953480 containerd[1693]: 2025-12-12 17:25:50.933 [INFO][4098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Namespace="calico-system" Pod="whisker-6568b8ffc6-2jn5v" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0", GenerateName:"whisker-6568b8ffc6-", Namespace:"calico-system", SelfLink:"", UID:"1b458fc3-1075-4699-aeb2-3b97525fad3c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6568b8ffc6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"", Pod:"whisker-6568b8ffc6-2jn5v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5702298808", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:50.953480 containerd[1693]: 2025-12-12 17:25:50.933 [INFO][4098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.129/32] ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Namespace="calico-system" Pod="whisker-6568b8ffc6-2jn5v" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" Dec 12 17:25:50.953552 containerd[1693]: 2025-12-12 17:25:50.933 [INFO][4098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5702298808 ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Namespace="calico-system" Pod="whisker-6568b8ffc6-2jn5v" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" Dec 12 17:25:50.953552 containerd[1693]: 2025-12-12 17:25:50.939 [INFO][4098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Namespace="calico-system" Pod="whisker-6568b8ffc6-2jn5v" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" Dec 12 17:25:50.953592 containerd[1693]: 2025-12-12 17:25:50.940 [INFO][4098] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Namespace="calico-system" Pod="whisker-6568b8ffc6-2jn5v" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0", GenerateName:"whisker-6568b8ffc6-", Namespace:"calico-system", SelfLink:"", UID:"1b458fc3-1075-4699-aeb2-3b97525fad3c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6568b8ffc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd", Pod:"whisker-6568b8ffc6-2jn5v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5702298808", MAC:"66:a5:a3:05:e0:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:50.953642 containerd[1693]: 2025-12-12 17:25:50.950 [INFO][4098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" Namespace="calico-system" Pod="whisker-6568b8ffc6-2jn5v" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-whisker--6568b8ffc6--2jn5v-eth0" Dec 12 17:25:50.978286 kubelet[2914]: I1212 17:25:50.978243 2914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a0a748-4991-4e73-9050-b5da442006d5" path="/var/lib/kubelet/pods/20a0a748-4991-4e73-9050-b5da442006d5/volumes" Dec 12 17:25:50.981859 containerd[1693]: time="2025-12-12T17:25:50.981360226Z" level=info msg="connecting to shim 3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd" address="unix:///run/containerd/s/146afbd5229f794ef8b69820eff5ff28c83e902704d2425da59c62b98a5e4fed" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:51.005074 systemd[1]: Started cri-containerd-3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd.scope - libcontainer container 3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd. 
Dec 12 17:25:51.014000 audit: BPF prog-id=180 op=LOAD Dec 12 17:25:51.015000 audit: BPF prog-id=181 op=LOAD Dec 12 17:25:51.015000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4138 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:51.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343930386263626438373932313633313663626161643639343931 Dec 12 17:25:51.015000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:25:51.015000 audit[4149]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4138 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:51.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343930386263626438373932313633313663626161643639343931 Dec 12 17:25:51.015000 audit: BPF prog-id=182 op=LOAD Dec 12 17:25:51.015000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4138 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:51.015000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343930386263626438373932313633313663626161643639343931 Dec 12 17:25:51.015000 audit: BPF prog-id=183 op=LOAD Dec 12 17:25:51.015000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4138 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:51.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343930386263626438373932313633313663626161643639343931 Dec 12 17:25:51.016000 audit: BPF prog-id=183 op=UNLOAD Dec 12 17:25:51.016000 audit[4149]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4138 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:51.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343930386263626438373932313633313663626161643639343931 Dec 12 17:25:51.016000 audit: BPF prog-id=182 op=UNLOAD Dec 12 17:25:51.016000 audit[4149]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4138 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:51.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343930386263626438373932313633313663626161643639343931 Dec 12 17:25:51.016000 audit: BPF prog-id=184 op=LOAD Dec 12 17:25:51.016000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4138 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:51.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343930386263626438373932313633313663626161643639343931 Dec 12 17:25:51.038639 containerd[1693]: time="2025-12-12T17:25:51.038579557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6568b8ffc6-2jn5v,Uid:1b458fc3-1075-4699-aeb2-3b97525fad3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d4908bcbd879216316cbaad6949181663dbde3e149ef89194260614ac9e68fd\"" Dec 12 17:25:51.040578 containerd[1693]: time="2025-12-12T17:25:51.040549927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:25:51.367581 containerd[1693]: time="2025-12-12T17:25:51.367481188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:51.369259 containerd[1693]: time="2025-12-12T17:25:51.369218117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:25:51.369338 
containerd[1693]: time="2025-12-12T17:25:51.369306237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:51.369520 kubelet[2914]: E1212 17:25:51.369483 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:25:51.369906 kubelet[2914]: E1212 17:25:51.369536 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:25:51.369962 kubelet[2914]: E1212 17:25:51.369675 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2ec095a6f82141b6ae474647ccc0bac7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions
:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:51.371559 containerd[1693]: time="2025-12-12T17:25:51.371535849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:25:51.686959 containerd[1693]: time="2025-12-12T17:25:51.686804811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:51.688305 containerd[1693]: time="2025-12-12T17:25:51.688206898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:25:51.688305 containerd[1693]: time="2025-12-12T17:25:51.688252578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:51.688454 kubelet[2914]: E1212 17:25:51.688416 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:25:51.688530 kubelet[2914]: E1212 17:25:51.688464 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:25:51.688667 kubelet[2914]: E1212 17:25:51.688583 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalat
ion:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:51.689803 kubelet[2914]: E1212 17:25:51.689753 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:25:52.322558 kubelet[2914]: E1212 17:25:52.322455 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:25:52.347000 audit[4298]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:52.347000 audit[4298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffd111400 a2=0 a3=1 items=0 ppid=3033 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:52.347000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:52.356000 audit[4298]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:52.356000 audit[4298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffd111400 a2=0 a3=1 items=0 ppid=3033 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:52.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:52.583168 systemd-networkd[1605]: calia5702298808: Gained IPv6LL Dec 12 17:25:53.132798 kubelet[2914]: I1212 17:25:53.132759 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:25:53.166000 
audit[4302]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:53.166000 audit[4302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc2fee0b0 a2=0 a3=1 items=0 ppid=3033 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:53.176000 audit[4302]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:53.176000 audit[4302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc2fee0b0 a2=0 a3=1 items=0 ppid=3033 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:53.500000 audit: BPF prog-id=185 op=LOAD Dec 12 17:25:53.500000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0a0d1c8 a2=98 a3=fffff0a0d1b8 items=0 ppid=4305 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.500000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:25:53.500000 audit: BPF prog-id=185 op=UNLOAD Dec 12 17:25:53.500000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff0a0d198 a3=0 items=0 ppid=4305 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.500000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:25:53.500000 audit: BPF prog-id=186 op=LOAD Dec 12 17:25:53.500000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0a0d078 a2=74 a3=95 items=0 ppid=4305 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.500000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:25:53.500000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:25:53.500000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4305 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.500000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:25:53.500000 audit: BPF prog-id=187 op=LOAD Dec 12 17:25:53.500000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0a0d0a8 a2=40 a3=fffff0a0d0d8 items=0 ppid=4305 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.500000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:25:53.500000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:25:53.500000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff0a0d0d8 items=0 ppid=4305 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.500000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:25:53.504000 audit: BPF prog-id=188 op=LOAD Dec 12 17:25:53.504000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffd2fba78 a2=98 a3=fffffd2fba68 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.504000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.505000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:25:53.505000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffd2fba48 a3=0 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.505000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.505000 audit: BPF prog-id=189 op=LOAD Dec 12 17:25:53.505000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffd2fb708 a2=74 a3=95 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.505000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.505000 audit: BPF prog-id=189 op=UNLOAD Dec 12 17:25:53.505000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.505000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.505000 audit: BPF prog-id=190 op=LOAD Dec 12 17:25:53.505000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffd2fb768 a2=94 a3=2 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.505000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.505000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:25:53.505000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.505000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.607000 audit: BPF prog-id=191 op=LOAD Dec 12 17:25:53.607000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffd2fb728 a2=40 a3=fffffd2fb758 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.607000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.607000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:25:53.607000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffd2fb758 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.607000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.617000 audit: BPF prog-id=192 op=LOAD Dec 12 17:25:53.617000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffd2fb738 a2=94 a3=4 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 17:25:53.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.618000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:25:53.618000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.618000 audit: BPF prog-id=193 op=LOAD Dec 12 17:25:53.618000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffd2fb578 a2=94 a3=5 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.618000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:25:53.618000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.618000 audit: BPF prog-id=194 op=LOAD Dec 12 17:25:53.618000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffd2fb7a8 a2=94 a3=6 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.618000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.618000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:25:53.618000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.618000 audit: BPF prog-id=195 op=LOAD Dec 12 17:25:53.618000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffd2faf78 a2=94 a3=83 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.618000 audit: BPF prog-id=196 op=LOAD Dec 12 17:25:53.618000 audit[4344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffd2fad38 a2=94 a3=2 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.619000 audit: BPF prog-id=196 op=UNLOAD Dec 12 17:25:53.619000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.619000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.619000 
audit: BPF prog-id=195 op=UNLOAD Dec 12 17:25:53.619000 audit[4344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1319c620 a3=1318fb00 items=0 ppid=4305 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.619000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:25:53.628000 audit: BPF prog-id=197 op=LOAD Dec 12 17:25:53.628000 audit[4366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffe9d2588 a2=98 a3=fffffe9d2578 items=0 ppid=4305 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.628000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:25:53.628000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:25:53.628000 audit[4366]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffe9d2558 a3=0 items=0 ppid=4305 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.628000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:25:53.628000 audit: BPF prog-id=198 op=LOAD Dec 12 17:25:53.628000 audit[4366]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=3 a0=5 a1=fffffe9d2438 a2=74 a3=95 items=0 ppid=4305 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.628000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:25:53.628000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:25:53.628000 audit[4366]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4305 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.628000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:25:53.628000 audit: BPF prog-id=199 op=LOAD Dec 12 17:25:53.628000 audit[4366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffe9d2468 a2=40 a3=fffffe9d2498 items=0 ppid=4305 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.628000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:25:53.629000 audit: BPF prog-id=199 
op=UNLOAD Dec 12 17:25:53.629000 audit[4366]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffffe9d2498 items=0 ppid=4305 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.629000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:25:53.695857 systemd-networkd[1605]: vxlan.calico: Link UP Dec 12 17:25:53.695872 systemd-networkd[1605]: vxlan.calico: Gained carrier Dec 12 17:25:53.721000 audit: BPF prog-id=200 op=LOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1b90488 a2=98 a3=fffff1b90478 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.721000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff1b90458 a3=0 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.721000 audit: BPF prog-id=201 op=LOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1b90168 a2=74 a3=95 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.721000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.721000 audit: BPF prog-id=202 op=LOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1b901c8 a2=94 a3=2 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.721000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.721000 audit: BPF prog-id=203 op=LOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff1b90048 a2=40 a3=fffff1b90078 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.721000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff1b90078 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.721000 audit: BPF prog-id=204 op=LOAD Dec 12 17:25:53.721000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff1b90198 a2=94 a3=b7 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.721000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.722000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:25:53.722000 audit[4391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.722000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.722000 audit: BPF prog-id=205 op=LOAD Dec 12 17:25:53.722000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff1b8f848 a2=94 a3=2 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.722000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.722000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:25:53.722000 audit[4391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.722000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.722000 audit: BPF prog-id=206 op=LOAD Dec 12 17:25:53.722000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff1b8f9d8 a2=94 a3=30 items=0 ppid=4305 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.722000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:25:53.725000 audit: BPF prog-id=207 op=LOAD Dec 12 17:25:53.725000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc0e03e8 a2=98 a3=ffffdc0e03d8 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.725000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.725000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:25:53.725000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdc0e03b8 a3=0 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.725000 audit: BPF prog-id=208 op=LOAD Dec 12 17:25:53.725000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc0e0078 a2=74 a3=95 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.725000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:25:53.725000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.725000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.725000 audit: BPF prog-id=209 op=LOAD Dec 12 17:25:53.725000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc0e00d8 a2=94 a3=2 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.725000 audit: BPF prog-id=209 op=UNLOAD Dec 12 17:25:53.725000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.825000 audit: BPF prog-id=210 op=LOAD Dec 12 17:25:53.825000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc0e0098 a2=40 a3=ffffdc0e00c8 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.825000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.825000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:25:53.825000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdc0e00c8 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.835000 audit: BPF prog-id=211 op=LOAD Dec 12 17:25:53.835000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc0e00a8 a2=94 a3=4 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.835000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.835000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:25:53.835000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.835000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.835000 audit: BPF prog-id=212 op=LOAD Dec 12 17:25:53.835000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdc0dfee8 a2=94 a3=5 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.835000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.835000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:25:53.835000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.835000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.835000 audit: BPF prog-id=213 op=LOAD Dec 12 17:25:53.835000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc0e0118 a2=94 a3=6 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.835000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.835000 audit: BPF prog-id=213 op=UNLOAD Dec 12 17:25:53.835000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.835000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.835000 audit: BPF prog-id=214 op=LOAD Dec 12 17:25:53.835000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc0df8e8 a2=94 a3=83 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.835000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.836000 audit: BPF prog-id=215 op=LOAD Dec 12 17:25:53.836000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffdc0df6a8 a2=94 a3=2 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.836000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.836000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:25:53.836000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.836000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.836000 audit: BPF prog-id=214 op=UNLOAD Dec 12 17:25:53.836000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1cd2e620 a3=1cd21b00 items=0 ppid=4305 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.836000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:25:53.849000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:25:53.849000 audit[4305]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=400102a280 a2=0 a3=0 items=0 ppid=4175 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.849000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 17:25:53.902000 audit[4421]: NETFILTER_CFG table=raw:123 family=2 entries=21 
op=nft_register_chain pid=4421 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:53.902000 audit[4421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffffcd4f5a0 a2=0 a3=ffff95494fa8 items=0 ppid=4305 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.902000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:53.910000 audit[4427]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4427 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:53.910000 audit[4427]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffffac215d0 a2=0 a3=ffffa790cfa8 items=0 ppid=4305 pid=4427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.910000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:53.910000 audit[4424]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=4424 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:53.910000 audit[4424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe8500040 a2=0 a3=ffffbc969fa8 items=0 ppid=4305 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.910000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:53.910000 audit[4423]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:53.910000 audit[4423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffc9e52a80 a2=0 a3=ffffa222afa8 items=0 ppid=4305 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.910000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:53.974766 containerd[1693]: time="2025-12-12T17:25:53.974712235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7dfdfddc-r69nw,Uid:482c1729-f8e9-4de5-b13e-dc7cf991c00d,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:25:53.975144 containerd[1693]: time="2025-12-12T17:25:53.974714475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nmqkr,Uid:38591688-bf7e-4006-99b5-49217c275f18,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:54.108176 systemd-networkd[1605]: calif7e9e600616: Link UP Dec 12 17:25:54.112421 systemd-networkd[1605]: calif7e9e600616: Gained carrier Dec 12 17:25:54.129372 containerd[1693]: 2025-12-12 17:25:54.020 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0 calico-apiserver-5b7dfdfddc- calico-apiserver 482c1729-f8e9-4de5-b13e-dc7cf991c00d 867 0 2025-12-12 17:25:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:5b7dfdfddc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-4-1de611bfb5 calico-apiserver-5b7dfdfddc-r69nw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7e9e600616 [] [] }} ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-r69nw" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-" Dec 12 17:25:54.129372 containerd[1693]: 2025-12-12 17:25:54.020 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-r69nw" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" Dec 12 17:25:54.129372 containerd[1693]: 2025-12-12 17:25:54.047 [INFO][4464] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" HandleID="k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.047 [INFO][4464] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" HandleID="k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-4-1de611bfb5", "pod":"calico-apiserver-5b7dfdfddc-r69nw", "timestamp":"2025-12-12 
17:25:54.047327924 +0000 UTC"}, Hostname:"ci-4515-1-0-4-1de611bfb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.047 [INFO][4464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.047 [INFO][4464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.048 [INFO][4464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-4-1de611bfb5' Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.057 [INFO][4464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.061 [INFO][4464] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.066 [INFO][4464] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.069 [INFO][4464] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129589 containerd[1693]: 2025-12-12 17:25:54.071 [INFO][4464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129763 containerd[1693]: 2025-12-12 17:25:54.072 [INFO][4464] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129763 
containerd[1693]: 2025-12-12 17:25:54.076 [INFO][4464] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285 Dec 12 17:25:54.129763 containerd[1693]: 2025-12-12 17:25:54.081 [INFO][4464] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129763 containerd[1693]: 2025-12-12 17:25:54.090 [INFO][4464] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.130/26] block=192.168.115.128/26 handle="k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129763 containerd[1693]: 2025-12-12 17:25:54.090 [INFO][4464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.130/26] handle="k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.129763 containerd[1693]: 2025-12-12 17:25:54.091 [INFO][4464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:25:54.129763 containerd[1693]: 2025-12-12 17:25:54.091 [INFO][4464] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.130/26] IPv6=[] ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" HandleID="k8s-pod-network.9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" Dec 12 17:25:54.129925 containerd[1693]: 2025-12-12 17:25:54.097 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-r69nw" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0", GenerateName:"calico-apiserver-5b7dfdfddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"482c1729-f8e9-4de5-b13e-dc7cf991c00d", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7dfdfddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"", Pod:"calico-apiserver-5b7dfdfddc-r69nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7e9e600616", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:54.129980 containerd[1693]: 2025-12-12 17:25:54.097 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.130/32] ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-r69nw" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" Dec 12 17:25:54.129980 containerd[1693]: 2025-12-12 17:25:54.097 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7e9e600616 ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-r69nw" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" Dec 12 17:25:54.129980 containerd[1693]: 2025-12-12 17:25:54.112 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-r69nw" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" Dec 12 17:25:54.130038 containerd[1693]: 2025-12-12 17:25:54.115 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-r69nw" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0", GenerateName:"calico-apiserver-5b7dfdfddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"482c1729-f8e9-4de5-b13e-dc7cf991c00d", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7dfdfddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285", Pod:"calico-apiserver-5b7dfdfddc-r69nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7e9e600616", MAC:"72:a9:91:df:da:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:54.130087 containerd[1693]: 2025-12-12 17:25:54.124 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-r69nw" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--r69nw-eth0" Dec 12 17:25:54.144000 audit[4488]: NETFILTER_CFG table=filter:127 family=2 
entries=50 op=nft_register_chain pid=4488 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:54.144000 audit[4488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=fffffefad480 a2=0 a3=ffff80624fa8 items=0 ppid=4305 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.144000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:54.156153 containerd[1693]: time="2025-12-12T17:25:54.156095837Z" level=info msg="connecting to shim 9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285" address="unix:///run/containerd/s/d73a38cc37340f9a68df11efeac3a71ba8660ef48b1d0adb513386e8efe1441b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:54.192488 systemd[1]: Started cri-containerd-9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285.scope - libcontainer container 9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285. 
Dec 12 17:25:54.195382 systemd-networkd[1605]: cali3a64f0716fc: Link UP Dec 12 17:25:54.196312 systemd-networkd[1605]: cali3a64f0716fc: Gained carrier Dec 12 17:25:54.209000 audit: BPF prog-id=216 op=LOAD Dec 12 17:25:54.211229 containerd[1693]: 2025-12-12 17:25:54.024 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0 csi-node-driver- calico-system 38591688-bf7e-4006-99b5-49217c275f18 756 0 2025-12-12 17:25:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515-1-0-4-1de611bfb5 csi-node-driver-nmqkr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3a64f0716fc [] [] }} ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Namespace="calico-system" Pod="csi-node-driver-nmqkr" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-" Dec 12 17:25:54.211229 containerd[1693]: 2025-12-12 17:25:54.024 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Namespace="calico-system" Pod="csi-node-driver-nmqkr" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" Dec 12 17:25:54.211229 containerd[1693]: 2025-12-12 17:25:54.048 [INFO][4466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" HandleID="k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Workload="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" Dec 12 17:25:54.211388 
containerd[1693]: 2025-12-12 17:25:54.048 [INFO][4466] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" HandleID="k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Workload="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b02b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-4-1de611bfb5", "pod":"csi-node-driver-nmqkr", "timestamp":"2025-12-12 17:25:54.048140689 +0000 UTC"}, Hostname:"ci-4515-1-0-4-1de611bfb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:25:54.211388 containerd[1693]: 2025-12-12 17:25:54.048 [INFO][4466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:25:54.211388 containerd[1693]: 2025-12-12 17:25:54.091 [INFO][4466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:25:54.211388 containerd[1693]: 2025-12-12 17:25:54.095 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-4-1de611bfb5' Dec 12 17:25:54.211388 containerd[1693]: 2025-12-12 17:25:54.159 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211388 containerd[1693]: 2025-12-12 17:25:54.164 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211388 containerd[1693]: 2025-12-12 17:25:54.170 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211388 containerd[1693]: 2025-12-12 17:25:54.172 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211388 containerd[1693]: 2025-12-12 17:25:54.175 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211591 containerd[1693]: 2025-12-12 17:25:54.175 [INFO][4466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211591 containerd[1693]: 2025-12-12 17:25:54.177 [INFO][4466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383 Dec 12 17:25:54.211591 containerd[1693]: 2025-12-12 17:25:54.182 [INFO][4466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211591 containerd[1693]: 2025-12-12 17:25:54.189 [INFO][4466] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.115.131/26] block=192.168.115.128/26 handle="k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211591 containerd[1693]: 2025-12-12 17:25:54.189 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.131/26] handle="k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:54.211591 containerd[1693]: 2025-12-12 17:25:54.189 [INFO][4466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:25:54.211591 containerd[1693]: 2025-12-12 17:25:54.189 [INFO][4466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.131/26] IPv6=[] ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" HandleID="k8s-pod-network.8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Workload="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" Dec 12 17:25:54.211715 containerd[1693]: 2025-12-12 17:25:54.191 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Namespace="calico-system" Pod="csi-node-driver-nmqkr" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"38591688-bf7e-4006-99b5-49217c275f18", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"", Pod:"csi-node-driver-nmqkr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a64f0716fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:54.211763 containerd[1693]: 2025-12-12 17:25:54.191 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.131/32] ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Namespace="calico-system" Pod="csi-node-driver-nmqkr" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" Dec 12 17:25:54.211763 containerd[1693]: 2025-12-12 17:25:54.191 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a64f0716fc ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Namespace="calico-system" Pod="csi-node-driver-nmqkr" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" Dec 12 17:25:54.211763 containerd[1693]: 2025-12-12 17:25:54.196 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Namespace="calico-system" Pod="csi-node-driver-nmqkr" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" Dec 12 17:25:54.211000 
audit: BPF prog-id=217 op=LOAD Dec 12 17:25:54.211000 audit[4507]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=4496 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964653861303263396330366230623539643833643137356532626232 Dec 12 17:25:54.211000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:25:54.211000 audit[4507]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964653861303263396330366230623539643833643137356532626232 Dec 12 17:25:54.212685 containerd[1693]: 2025-12-12 17:25:54.196 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Namespace="calico-system" Pod="csi-node-driver-nmqkr" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"38591688-bf7e-4006-99b5-49217c275f18", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383", Pod:"csi-node-driver-nmqkr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a64f0716fc", MAC:"12:6c:23:ad:61:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:54.212759 containerd[1693]: 2025-12-12 17:25:54.206 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" Namespace="calico-system" Pod="csi-node-driver-nmqkr" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-csi--node--driver--nmqkr-eth0" Dec 12 17:25:54.212000 audit: BPF prog-id=218 op=LOAD Dec 12 17:25:54.212000 audit[4507]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=4496 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964653861303263396330366230623539643833643137356532626232 Dec 12 17:25:54.212000 audit: BPF prog-id=219 op=LOAD Dec 12 17:25:54.212000 audit[4507]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=4496 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964653861303263396330366230623539643833643137356532626232 Dec 12 17:25:54.212000 audit: BPF prog-id=219 op=UNLOAD Dec 12 17:25:54.212000 audit[4507]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964653861303263396330366230623539643833643137356532626232 Dec 12 17:25:54.213000 audit: BPF prog-id=218 op=UNLOAD Dec 12 17:25:54.213000 audit[4507]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4496 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964653861303263396330366230623539643833643137356532626232 Dec 12 17:25:54.213000 audit: BPF prog-id=220 op=LOAD Dec 12 17:25:54.213000 audit[4507]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=4496 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964653861303263396330366230623539643833643137356532626232 Dec 12 17:25:54.223000 audit[4535]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:54.223000 audit[4535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffdfe74740 a2=0 a3=ffff8281bfa8 items=0 ppid=4305 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.223000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:54.241253 containerd[1693]: time="2025-12-12T17:25:54.241194869Z" level=info msg="connecting to shim 
8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383" address="unix:///run/containerd/s/bf73d0bb4e7a4285fb3de0d5075be12620577f4206a674740b637383b219792c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:54.243252 containerd[1693]: time="2025-12-12T17:25:54.243221120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7dfdfddc-r69nw,Uid:482c1729-f8e9-4de5-b13e-dc7cf991c00d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9de8a02c9c06b0b59d83d175e2bb25b44e18abd105b74f68b48f25a12c5e0285\"" Dec 12 17:25:54.245993 containerd[1693]: time="2025-12-12T17:25:54.245428571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:25:54.273224 systemd[1]: Started cri-containerd-8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383.scope - libcontainer container 8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383. Dec 12 17:25:54.282000 audit: BPF prog-id=221 op=LOAD Dec 12 17:25:54.282000 audit: BPF prog-id=222 op=LOAD Dec 12 17:25:54.282000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4551 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864363432663339336133613365363030646463336332666435353737 Dec 12 17:25:54.282000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:25:54.282000 audit[4563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4551 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864363432663339336133613365363030646463336332666435353737 Dec 12 17:25:54.282000 audit: BPF prog-id=223 op=LOAD Dec 12 17:25:54.282000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4551 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864363432663339336133613365363030646463336332666435353737 Dec 12 17:25:54.282000 audit: BPF prog-id=224 op=LOAD Dec 12 17:25:54.282000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4551 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864363432663339336133613365363030646463336332666435353737 Dec 12 17:25:54.282000 audit: BPF prog-id=224 op=UNLOAD Dec 12 17:25:54.282000 audit[4563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4551 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864363432663339336133613365363030646463336332666435353737 Dec 12 17:25:54.282000 audit: BPF prog-id=223 op=UNLOAD Dec 12 17:25:54.282000 audit[4563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4551 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864363432663339336133613365363030646463336332666435353737 Dec 12 17:25:54.282000 audit: BPF prog-id=225 op=LOAD Dec 12 17:25:54.282000 audit[4563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4551 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:54.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864363432663339336133613365363030646463336332666435353737 Dec 12 17:25:54.302328 containerd[1693]: time="2025-12-12T17:25:54.302282820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nmqkr,Uid:38591688-bf7e-4006-99b5-49217c275f18,Namespace:calico-system,Attempt:0,} 
returns sandbox id \"8d642f393a3a3e600ddc3c2fd55775e063404a096465a908c42738e1e2ea1383\"" Dec 12 17:25:54.800122 containerd[1693]: time="2025-12-12T17:25:54.799973109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:54.801849 containerd[1693]: time="2025-12-12T17:25:54.801794078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:25:54.801980 containerd[1693]: time="2025-12-12T17:25:54.801864278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:54.802204 kubelet[2914]: E1212 17:25:54.802139 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:25:54.802204 kubelet[2914]: E1212 17:25:54.802187 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:25:54.802854 kubelet[2914]: E1212 17:25:54.802735 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gntvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-r69nw_calico-apiserver(482c1729-f8e9-4de5-b13e-dc7cf991c00d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:54.803981 containerd[1693]: time="2025-12-12T17:25:54.803931329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:25:54.804046 kubelet[2914]: E1212 17:25:54.803929 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:25:54.951184 systemd-networkd[1605]: vxlan.calico: Gained IPv6LL Dec 12 17:25:55.321091 containerd[1693]: time="2025-12-12T17:25:55.320973516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:55.324110 containerd[1693]: time="2025-12-12T17:25:55.324075292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:25:55.324110 containerd[1693]: time="2025-12-12T17:25:55.324136692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:55.324369 kubelet[2914]: E1212 17:25:55.324329 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:25:55.324416 kubelet[2914]: E1212 17:25:55.324386 2914 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:25:55.324547 kubelet[2914]: E1212 17:25:55.324507 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,Std
inOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:55.326601 containerd[1693]: time="2025-12-12T17:25:55.326573464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:25:55.333421 kubelet[2914]: E1212 17:25:55.333375 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:25:55.335166 systemd-networkd[1605]: calif7e9e600616: Gained IPv6LL Dec 12 17:25:55.356000 audit[4591]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:55.360970 kernel: kauditd_printk_skb: 287 callbacks suppressed Dec 12 17:25:55.361039 kernel: audit: type=1325 audit(1765560355.356:687): table=filter:129 family=2 entries=20 op=nft_register_rule pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:55.356000 audit[4591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcd8fa7b0 a2=0 a3=1 items=0 ppid=3033 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:55.366439 kernel: audit: type=1300 audit(1765560355.356:687): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcd8fa7b0 a2=0 a3=1 items=0 ppid=3033 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:55.366498 kernel: audit: type=1327 audit(1765560355.356:687): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:55.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:55.370000 audit[4591]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:55.370000 audit[4591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcd8fa7b0 a2=0 a3=1 items=0 ppid=3033 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:55.377045 kernel: audit: type=1325 audit(1765560355.370:688): table=nat:130 family=2 entries=14 op=nft_register_rule pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:55.377145 kernel: audit: type=1300 audit(1765560355.370:688): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcd8fa7b0 a2=0 a3=1 items=0 ppid=3033 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:55.377193 kernel: audit: 
type=1327 audit(1765560355.370:688): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:55.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:55.677753 containerd[1693]: time="2025-12-12T17:25:55.677627768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:55.680845 containerd[1693]: time="2025-12-12T17:25:55.680633223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:25:55.680960 containerd[1693]: time="2025-12-12T17:25:55.680744424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:55.681459 kubelet[2914]: E1212 17:25:55.681384 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:25:55.682006 kubelet[2914]: E1212 17:25:55.681876 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:25:55.682514 kubelet[2914]: E1212 17:25:55.682369 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:55.683691 kubelet[2914]: E1212 17:25:55.683651 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:55.911070 systemd-networkd[1605]: cali3a64f0716fc: Gained IPv6LL Dec 12 17:25:55.975545 containerd[1693]: time="2025-12-12T17:25:55.975069679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pjdtp,Uid:9c61390d-c42e-46b3-8a0d-2bc07904de15,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:55.975545 containerd[1693]: time="2025-12-12T17:25:55.975115759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7dfdfddc-679rm,Uid:96a8850e-ec22-48ad-ae30-8b21eeca5a2c,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:25:55.975545 containerd[1693]: time="2025-12-12T17:25:55.975069719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4xpx4,Uid:936cef33-28fa-419e-841a-7a41dd1f707c,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:55.975545 containerd[1693]: time="2025-12-12T17:25:55.975069799Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-txwdj,Uid:5a0340c8-0c30-428c-a0be-59a60fca6418,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:56.126502 systemd-networkd[1605]: cali9587c80e140: Link UP Dec 12 17:25:56.128984 systemd-networkd[1605]: cali9587c80e140: Gained carrier Dec 12 17:25:56.142004 containerd[1693]: 2025-12-12 17:25:56.035 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0 coredns-674b8bbfcf- kube-system 936cef33-28fa-419e-841a-7a41dd1f707c 864 0 2025-12-12 17:25:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-4-1de611bfb5 coredns-674b8bbfcf-4xpx4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9587c80e140 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpx4" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-" Dec 12 17:25:56.142004 containerd[1693]: 2025-12-12 17:25:56.035 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpx4" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" Dec 12 17:25:56.142004 containerd[1693]: 2025-12-12 17:25:56.076 [INFO][4654] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" HandleID="k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Workload="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" Dec 12 17:25:56.142181 
containerd[1693]: 2025-12-12 17:25:56.077 [INFO][4654] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" HandleID="k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Workload="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-4-1de611bfb5", "pod":"coredns-674b8bbfcf-4xpx4", "timestamp":"2025-12-12 17:25:56.076827956 +0000 UTC"}, Hostname:"ci-4515-1-0-4-1de611bfb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:25:56.142181 containerd[1693]: 2025-12-12 17:25:56.077 [INFO][4654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:25:56.142181 containerd[1693]: 2025-12-12 17:25:56.077 [INFO][4654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:25:56.142181 containerd[1693]: 2025-12-12 17:25:56.077 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-4-1de611bfb5' Dec 12 17:25:56.142181 containerd[1693]: 2025-12-12 17:25:56.089 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142181 containerd[1693]: 2025-12-12 17:25:56.096 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142181 containerd[1693]: 2025-12-12 17:25:56.103 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142181 containerd[1693]: 2025-12-12 17:25:56.106 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142181 containerd[1693]: 2025-12-12 17:25:56.108 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142385 containerd[1693]: 2025-12-12 17:25:56.108 [INFO][4654] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142385 containerd[1693]: 2025-12-12 17:25:56.110 [INFO][4654] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320 Dec 12 17:25:56.142385 containerd[1693]: 2025-12-12 17:25:56.113 [INFO][4654] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142385 containerd[1693]: 2025-12-12 17:25:56.121 [INFO][4654] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.115.132/26] block=192.168.115.128/26 handle="k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142385 containerd[1693]: 2025-12-12 17:25:56.121 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.132/26] handle="k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.142385 containerd[1693]: 2025-12-12 17:25:56.121 [INFO][4654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:25:56.142385 containerd[1693]: 2025-12-12 17:25:56.121 [INFO][4654] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.132/26] IPv6=[] ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" HandleID="k8s-pod-network.0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Workload="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" Dec 12 17:25:56.142524 containerd[1693]: 2025-12-12 17:25:56.123 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpx4" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"936cef33-28fa-419e-841a-7a41dd1f707c", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"", Pod:"coredns-674b8bbfcf-4xpx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9587c80e140", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:56.142524 containerd[1693]: 2025-12-12 17:25:56.123 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.132/32] ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpx4" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" Dec 12 17:25:56.142524 containerd[1693]: 2025-12-12 17:25:56.124 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9587c80e140 ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpx4" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" Dec 12 17:25:56.142524 containerd[1693]: 2025-12-12 17:25:56.127 [INFO][4594] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpx4" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" Dec 12 17:25:56.142524 containerd[1693]: 2025-12-12 17:25:56.128 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpx4" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"936cef33-28fa-419e-841a-7a41dd1f707c", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320", Pod:"coredns-674b8bbfcf-4xpx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9587c80e140", 
MAC:"2e:85:c7:76:f3:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:56.142524 containerd[1693]: 2025-12-12 17:25:56.139 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpx4" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--4xpx4-eth0" Dec 12 17:25:56.152000 audit[4700]: NETFILTER_CFG table=filter:131 family=2 entries=50 op=nft_register_chain pid=4700 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:56.152000 audit[4700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=ffffdf91a9a0 a2=0 a3=ffff99921fa8 items=0 ppid=4305 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.155948 kernel: audit: type=1325 audit(1765560356.152:689): table=filter:131 family=2 entries=50 op=nft_register_chain pid=4700 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:56.152000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:56.163211 kernel: audit: type=1300 audit(1765560356.152:689): arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=ffffdf91a9a0 
a2=0 a3=ffff99921fa8 items=0 ppid=4305 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.163270 kernel: audit: type=1327 audit(1765560356.152:689): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:56.168276 containerd[1693]: time="2025-12-12T17:25:56.168236421Z" level=info msg="connecting to shim 0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320" address="unix:///run/containerd/s/c2a5fbcb1dcfaedf4589411db16eb94c4d28d4f62d92bb66516c269f8c905298" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:56.194145 systemd[1]: Started cri-containerd-0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320.scope - libcontainer container 0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320. 
Dec 12 17:25:56.206000 audit: BPF prog-id=226 op=LOAD Dec 12 17:25:56.207000 audit: BPF prog-id=227 op=LOAD Dec 12 17:25:56.207000 audit[4721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4710 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.208878 kernel: audit: type=1334 audit(1765560356.206:690): prog-id=226 op=LOAD Dec 12 17:25:56.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316431646565633935633863623738306236306565633039666666 Dec 12 17:25:56.207000 audit: BPF prog-id=227 op=UNLOAD Dec 12 17:25:56.207000 audit[4721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316431646565633935633863623738306236306565633039666666 Dec 12 17:25:56.207000 audit: BPF prog-id=228 op=LOAD Dec 12 17:25:56.207000 audit[4721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4710 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.207000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316431646565633935633863623738306236306565633039666666 Dec 12 17:25:56.207000 audit: BPF prog-id=229 op=LOAD Dec 12 17:25:56.207000 audit[4721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4710 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316431646565633935633863623738306236306565633039666666 Dec 12 17:25:56.207000 audit: BPF prog-id=229 op=UNLOAD Dec 12 17:25:56.207000 audit[4721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316431646565633935633863623738306236306565633039666666 Dec 12 17:25:56.207000 audit: BPF prog-id=228 op=UNLOAD Dec 12 17:25:56.207000 audit[4721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:56.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316431646565633935633863623738306236306565633039666666 Dec 12 17:25:56.207000 audit: BPF prog-id=230 op=LOAD Dec 12 17:25:56.207000 audit[4721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4710 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316431646565633935633863623738306236306565633039666666 Dec 12 17:25:56.234492 systemd-networkd[1605]: calib2949f7cc8c: Link UP Dec 12 17:25:56.235927 systemd-networkd[1605]: calib2949f7cc8c: Gained carrier Dec 12 17:25:56.245934 containerd[1693]: time="2025-12-12T17:25:56.245899175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4xpx4,Uid:936cef33-28fa-419e-841a-7a41dd1f707c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320\"" Dec 12 17:25:56.253673 containerd[1693]: time="2025-12-12T17:25:56.253617815Z" level=info msg="CreateContainer within sandbox \"0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.046 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0 coredns-674b8bbfcf- kube-system 
5a0340c8-0c30-428c-a0be-59a60fca6418 865 0 2025-12-12 17:25:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-4-1de611bfb5 coredns-674b8bbfcf-txwdj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2949f7cc8c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-txwdj" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.046 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-txwdj" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.084 [INFO][4661] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" HandleID="k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Workload="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.084 [INFO][4661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" HandleID="k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Workload="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-4-1de611bfb5", "pod":"coredns-674b8bbfcf-txwdj", 
"timestamp":"2025-12-12 17:25:56.084552316 +0000 UTC"}, Hostname:"ci-4515-1-0-4-1de611bfb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.084 [INFO][4661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.121 [INFO][4661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.121 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-4-1de611bfb5' Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.189 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.197 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.203 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.206 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.208 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.208 [INFO][4661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" host="ci-4515-1-0-4-1de611bfb5" Dec 12 
17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.210 [INFO][4661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4 Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.214 [INFO][4661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.223 [INFO][4661] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.133/26] block=192.168.115.128/26 handle="k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.223 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.133/26] handle="k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.223 [INFO][4661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:25:56.255939 containerd[1693]: 2025-12-12 17:25:56.223 [INFO][4661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.133/26] IPv6=[] ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" HandleID="k8s-pod-network.11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Workload="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" Dec 12 17:25:56.256425 containerd[1693]: 2025-12-12 17:25:56.228 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-txwdj" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a0340c8-0c30-428c-a0be-59a60fca6418", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"", Pod:"coredns-674b8bbfcf-txwdj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calib2949f7cc8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:56.256425 containerd[1693]: 2025-12-12 17:25:56.229 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.133/32] ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-txwdj" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" Dec 12 17:25:56.256425 containerd[1693]: 2025-12-12 17:25:56.229 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2949f7cc8c ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-txwdj" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" Dec 12 17:25:56.256425 containerd[1693]: 2025-12-12 17:25:56.237 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-txwdj" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" Dec 12 17:25:56.256425 containerd[1693]: 2025-12-12 17:25:56.238 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-txwdj" 
WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a0340c8-0c30-428c-a0be-59a60fca6418", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4", Pod:"coredns-674b8bbfcf-txwdj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2949f7cc8c", MAC:"6a:31:f3:08:dc:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:56.256425 
containerd[1693]: 2025-12-12 17:25:56.252 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-txwdj" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-coredns--674b8bbfcf--txwdj-eth0" Dec 12 17:25:56.272000 audit[4756]: NETFILTER_CFG table=filter:132 family=2 entries=44 op=nft_register_chain pid=4756 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:56.272000 audit[4756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21532 a0=3 a1=fffff74c5170 a2=0 a3=ffff8500efa8 items=0 ppid=4305 pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.272000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:56.274334 containerd[1693]: time="2025-12-12T17:25:56.273339955Z" level=info msg="Container dba4e1178599ab524019f8bae6f6011b330c0e56a357a29445c1e4efe5077b97: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:56.282336 containerd[1693]: time="2025-12-12T17:25:56.282280520Z" level=info msg="CreateContainer within sandbox \"0c1d1deec95c8cb780b60eec09fff1118f9224e1b5d1d6d990f9df80a69ec320\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dba4e1178599ab524019f8bae6f6011b330c0e56a357a29445c1e4efe5077b97\"" Dec 12 17:25:56.283880 containerd[1693]: time="2025-12-12T17:25:56.283532287Z" level=info msg="StartContainer for \"dba4e1178599ab524019f8bae6f6011b330c0e56a357a29445c1e4efe5077b97\"" Dec 12 17:25:56.285338 containerd[1693]: time="2025-12-12T17:25:56.285307616Z" level=info msg="connecting to shim dba4e1178599ab524019f8bae6f6011b330c0e56a357a29445c1e4efe5077b97" 
address="unix:///run/containerd/s/c2a5fbcb1dcfaedf4589411db16eb94c4d28d4f62d92bb66516c269f8c905298" protocol=ttrpc version=3 Dec 12 17:25:56.294928 containerd[1693]: time="2025-12-12T17:25:56.294883144Z" level=info msg="connecting to shim 11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4" address="unix:///run/containerd/s/7ad97425b88cfd421a1d75d2256bfc249cc661d15bb7d42eeb11f1749868440f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:56.307214 systemd[1]: Started cri-containerd-dba4e1178599ab524019f8bae6f6011b330c0e56a357a29445c1e4efe5077b97.scope - libcontainer container dba4e1178599ab524019f8bae6f6011b330c0e56a357a29445c1e4efe5077b97. Dec 12 17:25:56.320067 systemd[1]: Started cri-containerd-11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4.scope - libcontainer container 11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4. Dec 12 17:25:56.326000 audit: BPF prog-id=231 op=LOAD Dec 12 17:25:56.327000 audit: BPF prog-id=232 op=LOAD Dec 12 17:25:56.327000 audit[4758]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4710 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462613465313137383539396162353234303139663862616536663630 Dec 12 17:25:56.327000 audit: BPF prog-id=232 op=UNLOAD Dec 12 17:25:56.327000 audit[4758]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:25:56.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462613465313137383539396162353234303139663862616536663630 Dec 12 17:25:56.327000 audit: BPF prog-id=233 op=LOAD Dec 12 17:25:56.327000 audit[4758]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4710 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462613465313137383539396162353234303139663862616536663630 Dec 12 17:25:56.328000 audit: BPF prog-id=234 op=LOAD Dec 12 17:25:56.328000 audit[4758]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4710 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462613465313137383539396162353234303139663862616536663630 Dec 12 17:25:56.328000 audit: BPF prog-id=234 op=UNLOAD Dec 12 17:25:56.328000 audit[4758]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462613465313137383539396162353234303139663862616536663630 Dec 12 17:25:56.328000 audit: BPF prog-id=233 op=UNLOAD Dec 12 17:25:56.328000 audit[4758]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462613465313137383539396162353234303139663862616536663630 Dec 12 17:25:56.328000 audit: BPF prog-id=235 op=LOAD Dec 12 17:25:56.328000 audit[4758]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4710 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462613465313137383539396162353234303139663862616536663630 Dec 12 17:25:56.343256 kubelet[2914]: E1212 17:25:56.341519 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:25:56.344967 systemd-networkd[1605]: cali429eb36160b: Link UP Dec 12 17:25:56.345767 systemd-networkd[1605]: cali429eb36160b: Gained carrier Dec 12 17:25:56.350000 audit: BPF prog-id=236 op=LOAD Dec 12 17:25:56.355000 audit: BPF prog-id=237 op=LOAD Dec 12 17:25:56.355000 audit[4789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4771 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131633763346337653930643530653434366532363238613334383330 Dec 12 17:25:56.356000 audit: BPF prog-id=237 op=UNLOAD Dec 12 17:25:56.356000 audit[4789]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4771 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.356000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131633763346337653930643530653434366532363238613334383330 Dec 12 17:25:56.356000 audit: BPF prog-id=238 op=LOAD Dec 12 17:25:56.356000 audit[4789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4771 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131633763346337653930643530653434366532363238613334383330 Dec 12 17:25:56.357000 audit: BPF prog-id=239 op=LOAD Dec 12 17:25:56.357000 audit[4789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4771 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131633763346337653930643530653434366532363238613334383330 Dec 12 17:25:56.357000 audit: BPF prog-id=239 op=UNLOAD Dec 12 17:25:56.357000 audit[4789]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4771 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 17:25:56.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131633763346337653930643530653434366532363238613334383330 Dec 12 17:25:56.357000 audit: BPF prog-id=238 op=UNLOAD Dec 12 17:25:56.357000 audit[4789]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4771 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131633763346337653930643530653434366532363238613334383330 Dec 12 17:25:56.357000 audit: BPF prog-id=240 op=LOAD Dec 12 17:25:56.357000 audit[4789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4771 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131633763346337653930643530653434366532363238613334383330 Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.065 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0 goldmane-666569f655- calico-system 9c61390d-c42e-46b3-8a0d-2bc07904de15 
866 0 2025-12-12 17:25:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515-1-0-4-1de611bfb5 goldmane-666569f655-pjdtp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali429eb36160b [] [] }} ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Namespace="calico-system" Pod="goldmane-666569f655-pjdtp" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.065 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Namespace="calico-system" Pod="goldmane-666569f655-pjdtp" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.099 [INFO][4670] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" HandleID="k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Workload="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.099 [INFO][4670] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" HandleID="k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Workload="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400053d9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-4-1de611bfb5", "pod":"goldmane-666569f655-pjdtp", "timestamp":"2025-12-12 17:25:56.099737433 +0000 
UTC"}, Hostname:"ci-4515-1-0-4-1de611bfb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.100 [INFO][4670] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.223 [INFO][4670] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.223 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-4-1de611bfb5' Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.289 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.298 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.305 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.309 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.314 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.314 [INFO][4670] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 
17:25:56.316 [INFO][4670] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002 Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.323 [INFO][4670] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.333 [INFO][4670] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.134/26] block=192.168.115.128/26 handle="k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.333 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.134/26] handle="k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.333 [INFO][4670] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:25:56.372696 containerd[1693]: 2025-12-12 17:25:56.333 [INFO][4670] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.134/26] IPv6=[] ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" HandleID="k8s-pod-network.bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Workload="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" Dec 12 17:25:56.376007 containerd[1693]: 2025-12-12 17:25:56.338 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Namespace="calico-system" Pod="goldmane-666569f655-pjdtp" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9c61390d-c42e-46b3-8a0d-2bc07904de15", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"", Pod:"goldmane-666569f655-pjdtp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali429eb36160b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:56.376007 containerd[1693]: 2025-12-12 17:25:56.339 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.134/32] ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Namespace="calico-system" Pod="goldmane-666569f655-pjdtp" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" Dec 12 17:25:56.376007 containerd[1693]: 2025-12-12 17:25:56.339 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali429eb36160b ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Namespace="calico-system" Pod="goldmane-666569f655-pjdtp" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" Dec 12 17:25:56.376007 containerd[1693]: 2025-12-12 17:25:56.349 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Namespace="calico-system" Pod="goldmane-666569f655-pjdtp" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" Dec 12 17:25:56.376007 containerd[1693]: 2025-12-12 17:25:56.353 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Namespace="calico-system" Pod="goldmane-666569f655-pjdtp" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0", GenerateName:"goldmane-666569f655-", 
Namespace:"calico-system", SelfLink:"", UID:"9c61390d-c42e-46b3-8a0d-2bc07904de15", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002", Pod:"goldmane-666569f655-pjdtp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali429eb36160b", MAC:"0e:2f:69:85:a2:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:56.376007 containerd[1693]: 2025-12-12 17:25:56.369 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" Namespace="calico-system" Pod="goldmane-666569f655-pjdtp" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-goldmane--666569f655--pjdtp-eth0" Dec 12 17:25:56.379369 containerd[1693]: time="2025-12-12T17:25:56.379313653Z" level=info msg="StartContainer for \"dba4e1178599ab524019f8bae6f6011b330c0e56a357a29445c1e4efe5077b97\" returns successfully" Dec 12 17:25:56.398000 audit[4840]: NETFILTER_CFG table=filter:133 family=2 entries=60 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Dec 12 17:25:56.398000 audit[4840]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29932 a0=3 a1=ffffc6c33760 a2=0 a3=ffff9131dfa8 items=0 ppid=4305 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.398000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:56.406190 containerd[1693]: time="2025-12-12T17:25:56.406147550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txwdj,Uid:5a0340c8-0c30-428c-a0be-59a60fca6418,Namespace:kube-system,Attempt:0,} returns sandbox id \"11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4\"" Dec 12 17:25:56.408268 containerd[1693]: time="2025-12-12T17:25:56.408229600Z" level=info msg="connecting to shim bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002" address="unix:///run/containerd/s/827aaf43b7e7e6e0a7d79943dccdd759815057e1b9f51e8b6d93815b64ddee42" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:56.416147 containerd[1693]: time="2025-12-12T17:25:56.416074840Z" level=info msg="CreateContainer within sandbox \"11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:25:56.433696 containerd[1693]: time="2025-12-12T17:25:56.433546529Z" level=info msg="Container 86d651e4aa46f381df464e0f26ab215cc3394c1fc6f7bf3fb1cec703ab37a713: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:56.447736 containerd[1693]: time="2025-12-12T17:25:56.447061277Z" level=info msg="CreateContainer within sandbox \"11c7c4c7e90d50e446e2628a348309ef586d98ad9b4537e05783ba31285adac4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"86d651e4aa46f381df464e0f26ab215cc3394c1fc6f7bf3fb1cec703ab37a713\"" Dec 12 17:25:56.449448 containerd[1693]: time="2025-12-12T17:25:56.449370889Z" level=info msg="StartContainer for \"86d651e4aa46f381df464e0f26ab215cc3394c1fc6f7bf3fb1cec703ab37a713\"" Dec 12 17:25:56.456103 systemd[1]: Started cri-containerd-bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002.scope - libcontainer container bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002. Dec 12 17:25:56.456470 containerd[1693]: time="2025-12-12T17:25:56.455254959Z" level=info msg="connecting to shim 86d651e4aa46f381df464e0f26ab215cc3394c1fc6f7bf3fb1cec703ab37a713" address="unix:///run/containerd/s/7ad97425b88cfd421a1d75d2256bfc249cc661d15bb7d42eeb11f1749868440f" protocol=ttrpc version=3 Dec 12 17:25:56.465601 systemd-networkd[1605]: cali90906d4967b: Link UP Dec 12 17:25:56.467908 systemd-networkd[1605]: cali90906d4967b: Gained carrier Dec 12 17:25:56.479000 audit: BPF prog-id=241 op=LOAD Dec 12 17:25:56.479000 audit: BPF prog-id=242 op=LOAD Dec 12 17:25:56.479000 audit[4862]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373365316530313130663030303633613961663936396435383663 Dec 12 17:25:56.480000 audit: BPF prog-id=242 op=UNLOAD Dec 12 17:25:56.480000 audit[4862]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373365316530313130663030303633613961663936396435383663 Dec 12 17:25:56.480000 audit: BPF prog-id=243 op=LOAD Dec 12 17:25:56.480000 audit[4862]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373365316530313130663030303633613961663936396435383663 Dec 12 17:25:56.481000 audit: BPF prog-id=244 op=LOAD Dec 12 17:25:56.481000 audit[4862]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373365316530313130663030303633613961663936396435383663 Dec 12 17:25:56.481000 audit: BPF prog-id=244 op=UNLOAD Dec 12 17:25:56.481000 audit[4862]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373365316530313130663030303633613961663936396435383663 Dec 12 17:25:56.481000 audit: BPF prog-id=243 op=UNLOAD Dec 12 17:25:56.481000 audit[4862]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373365316530313130663030303633613961663936396435383663 Dec 12 17:25:56.481000 audit: BPF prog-id=245 op=LOAD Dec 12 17:25:56.481000 audit[4862]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373365316530313130663030303633613961663936396435383663 Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.067 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0 calico-apiserver-5b7dfdfddc- calico-apiserver 96a8850e-ec22-48ad-ae30-8b21eeca5a2c 868 0 2025-12-12 17:25:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b7dfdfddc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-4-1de611bfb5 calico-apiserver-5b7dfdfddc-679rm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90906d4967b [] [] }} ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-679rm" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.067 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-679rm" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.106 [INFO][4677] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" HandleID="k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.106 [INFO][4677] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" HandleID="k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" 
Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137500), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-4-1de611bfb5", "pod":"calico-apiserver-5b7dfdfddc-679rm", "timestamp":"2025-12-12 17:25:56.106746668 +0000 UTC"}, Hostname:"ci-4515-1-0-4-1de611bfb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.107 [INFO][4677] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.333 [INFO][4677] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.333 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-4-1de611bfb5' Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.390 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.400 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.418 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.422 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.427 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 
host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.427 [INFO][4677] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.430 [INFO][4677] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57 Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.436 [INFO][4677] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.449 [INFO][4677] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.135/26] block=192.168.115.128/26 handle="k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.450 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.135/26] handle="k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.451 [INFO][4677] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:25:56.495527 containerd[1693]: 2025-12-12 17:25:56.451 [INFO][4677] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.135/26] IPv6=[] ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" HandleID="k8s-pod-network.bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" Dec 12 17:25:56.496759 containerd[1693]: 2025-12-12 17:25:56.456 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-679rm" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0", GenerateName:"calico-apiserver-5b7dfdfddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a8850e-ec22-48ad-ae30-8b21eeca5a2c", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7dfdfddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"", Pod:"calico-apiserver-5b7dfdfddc-679rm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.115.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90906d4967b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:56.496759 containerd[1693]: 2025-12-12 17:25:56.456 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.135/32] ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-679rm" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" Dec 12 17:25:56.496759 containerd[1693]: 2025-12-12 17:25:56.456 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90906d4967b ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-679rm" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" Dec 12 17:25:56.496759 containerd[1693]: 2025-12-12 17:25:56.470 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-679rm" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" Dec 12 17:25:56.496759 containerd[1693]: 2025-12-12 17:25:56.472 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-679rm" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0", GenerateName:"calico-apiserver-5b7dfdfddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a8850e-ec22-48ad-ae30-8b21eeca5a2c", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7dfdfddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57", Pod:"calico-apiserver-5b7dfdfddc-679rm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90906d4967b", MAC:"62:52:6b:9a:f3:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:56.496759 containerd[1693]: 2025-12-12 17:25:56.491 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" Namespace="calico-apiserver" Pod="calico-apiserver-5b7dfdfddc-679rm" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--apiserver--5b7dfdfddc--679rm-eth0" Dec 12 17:25:56.499102 systemd[1]: Started 
cri-containerd-86d651e4aa46f381df464e0f26ab215cc3394c1fc6f7bf3fb1cec703ab37a713.scope - libcontainer container 86d651e4aa46f381df464e0f26ab215cc3394c1fc6f7bf3fb1cec703ab37a713. Dec 12 17:25:56.525000 audit: BPF prog-id=246 op=LOAD Dec 12 17:25:56.525000 audit: BPF prog-id=247 op=LOAD Dec 12 17:25:56.525000 audit[4878]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4771 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643635316534616134366633383164663436346530663236616232 Dec 12 17:25:56.525000 audit: BPF prog-id=247 op=UNLOAD Dec 12 17:25:56.525000 audit[4878]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4771 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643635316534616134366633383164663436346530663236616232 Dec 12 17:25:56.525000 audit: BPF prog-id=248 op=LOAD Dec 12 17:25:56.525000 audit[4878]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4771 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.525000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643635316534616134366633383164663436346530663236616232 Dec 12 17:25:56.525000 audit: BPF prog-id=249 op=LOAD Dec 12 17:25:56.525000 audit[4878]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4771 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643635316534616134366633383164663436346530663236616232 Dec 12 17:25:56.525000 audit: BPF prog-id=249 op=UNLOAD Dec 12 17:25:56.525000 audit[4878]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4771 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643635316534616134366633383164663436346530663236616232 Dec 12 17:25:56.525000 audit: BPF prog-id=248 op=UNLOAD Dec 12 17:25:56.525000 audit[4878]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4771 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 17:25:56.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643635316534616134366633383164663436346530663236616232 Dec 12 17:25:56.525000 audit: BPF prog-id=250 op=LOAD Dec 12 17:25:56.525000 audit[4878]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4771 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643635316534616134366633383164663436346530663236616232 Dec 12 17:25:56.527000 audit[4921]: NETFILTER_CFG table=filter:134 family=2 entries=63 op=nft_register_chain pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:56.527000 audit[4921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30680 a0=3 a1=ffffc377ab50 a2=0 a3=ffffa1fabfa8 items=0 ppid=4305 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.527000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:56.532525 containerd[1693]: time="2025-12-12T17:25:56.532389551Z" level=info msg="connecting to shim bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57" 
address="unix:///run/containerd/s/ee5a6122a3f19c9d918ee10e78dad17edc2b0b3e618f168d5410ef106a47bbcf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:56.539210 containerd[1693]: time="2025-12-12T17:25:56.539166945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pjdtp,Uid:9c61390d-c42e-46b3-8a0d-2bc07904de15,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd73e1e0110f00063a9af969d586c7be099972850bd8330a8ba28945f9626002\"" Dec 12 17:25:56.541759 containerd[1693]: time="2025-12-12T17:25:56.541723278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:25:56.575059 systemd[1]: Started cri-containerd-bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57.scope - libcontainer container bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57. Dec 12 17:25:56.575956 containerd[1693]: time="2025-12-12T17:25:56.575445170Z" level=info msg="StartContainer for \"86d651e4aa46f381df464e0f26ab215cc3394c1fc6f7bf3fb1cec703ab37a713\" returns successfully" Dec 12 17:25:56.598000 audit: BPF prog-id=251 op=LOAD Dec 12 17:25:56.598000 audit: BPF prog-id=252 op=LOAD Dec 12 17:25:56.598000 audit[4939]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4926 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313134313364653166323635646361616433653234646262623633 Dec 12 17:25:56.599000 audit: BPF prog-id=252 op=UNLOAD Dec 12 17:25:56.599000 audit[4939]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4926 pid=4939 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313134313364653166323635646361616433653234646262623633 Dec 12 17:25:56.599000 audit: BPF prog-id=253 op=LOAD Dec 12 17:25:56.599000 audit[4939]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4926 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313134313364653166323635646361616433653234646262623633 Dec 12 17:25:56.599000 audit: BPF prog-id=254 op=LOAD Dec 12 17:25:56.599000 audit[4939]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4926 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313134313364653166323635646361616433653234646262623633 Dec 12 17:25:56.599000 audit: BPF prog-id=254 op=UNLOAD Dec 12 17:25:56.599000 audit[4939]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=4926 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313134313364653166323635646361616433653234646262623633 Dec 12 17:25:56.599000 audit: BPF prog-id=253 op=UNLOAD Dec 12 17:25:56.599000 audit[4939]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4926 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313134313364653166323635646361616433653234646262623633 Dec 12 17:25:56.599000 audit: BPF prog-id=255 op=LOAD Dec 12 17:25:56.599000 audit[4939]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4926 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:56.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313134313364653166323635646361616433653234646262623633 Dec 12 17:25:56.628066 containerd[1693]: time="2025-12-12T17:25:56.628022637Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b7dfdfddc-679rm,Uid:96a8850e-ec22-48ad-ae30-8b21eeca5a2c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bd11413de1f265dcaad3e24dbbb63f9fda828b83b384ac8a0bba9c4bc48b9a57\"" Dec 12 17:25:56.895379 containerd[1693]: time="2025-12-12T17:25:56.895332595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:56.896713 containerd[1693]: time="2025-12-12T17:25:56.896679922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:25:56.896827 containerd[1693]: time="2025-12-12T17:25:56.896691802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:56.897009 kubelet[2914]: E1212 17:25:56.896968 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:25:56.897150 kubelet[2914]: E1212 17:25:56.897015 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:25:56.897573 kubelet[2914]: E1212 17:25:56.897255 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsx5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pjdtp_calico-system(9c61390d-c42e-46b3-8a0d-2bc07904de15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:56.897764 containerd[1693]: time="2025-12-12T17:25:56.897277765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:25:56.898575 kubelet[2914]: E1212 17:25:56.898547 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:25:56.976028 containerd[1693]: time="2025-12-12T17:25:56.975988525Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6bb58f788f-rt28s,Uid:4f0bba68-28b8-4297-9bde-8061a24d85f6,Namespace:calico-system,Attempt:0,}" Dec 12 17:25:57.153974 systemd-networkd[1605]: cali16b136df5fc: Link UP Dec 12 17:25:57.154470 systemd-networkd[1605]: cali16b136df5fc: Gained carrier Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.016 [INFO][4983] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0 calico-kube-controllers-6bb58f788f- calico-system 4f0bba68-28b8-4297-9bde-8061a24d85f6 869 0 2025-12-12 17:25:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bb58f788f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515-1-0-4-1de611bfb5 calico-kube-controllers-6bb58f788f-rt28s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali16b136df5fc [] [] }} ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Namespace="calico-system" Pod="calico-kube-controllers-6bb58f788f-rt28s" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.016 [INFO][4983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Namespace="calico-system" Pod="calico-kube-controllers-6bb58f788f-rt28s" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.038 [INFO][4992] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" HandleID="k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.038 [INFO][4992] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" HandleID="k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034a240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-4-1de611bfb5", "pod":"calico-kube-controllers-6bb58f788f-rt28s", "timestamp":"2025-12-12 17:25:57.038534363 +0000 UTC"}, Hostname:"ci-4515-1-0-4-1de611bfb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.038 [INFO][4992] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.038 [INFO][4992] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.038 [INFO][4992] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-4-1de611bfb5' Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.048 [INFO][4992] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.052 [INFO][4992] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.056 [INFO][4992] ipam/ipam.go 511: Trying affinity for 192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.058 [INFO][4992] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.060 [INFO][4992] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.060 [INFO][4992] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.061 [INFO][4992] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82 Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.134 [INFO][4992] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.149 [INFO][4992] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.115.136/26] block=192.168.115.128/26 handle="k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.149 [INFO][4992] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.136/26] handle="k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" host="ci-4515-1-0-4-1de611bfb5" Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.149 [INFO][4992] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:25:57.167429 containerd[1693]: 2025-12-12 17:25:57.149 [INFO][4992] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.136/26] IPv6=[] ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" HandleID="k8s-pod-network.49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Workload="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" Dec 12 17:25:57.168872 containerd[1693]: 2025-12-12 17:25:57.151 [INFO][4983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Namespace="calico-system" Pod="calico-kube-controllers-6bb58f788f-rt28s" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0", GenerateName:"calico-kube-controllers-6bb58f788f-", Namespace:"calico-system", SelfLink:"", UID:"4f0bba68-28b8-4297-9bde-8061a24d85f6", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb58f788f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"", Pod:"calico-kube-controllers-6bb58f788f-rt28s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16b136df5fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:57.168872 containerd[1693]: 2025-12-12 17:25:57.152 [INFO][4983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.136/32] ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Namespace="calico-system" Pod="calico-kube-controllers-6bb58f788f-rt28s" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" Dec 12 17:25:57.168872 containerd[1693]: 2025-12-12 17:25:57.152 [INFO][4983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16b136df5fc ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Namespace="calico-system" Pod="calico-kube-controllers-6bb58f788f-rt28s" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" Dec 12 17:25:57.168872 containerd[1693]: 2025-12-12 17:25:57.154 [INFO][4983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Namespace="calico-system" Pod="calico-kube-controllers-6bb58f788f-rt28s" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" Dec 12 17:25:57.168872 containerd[1693]: 2025-12-12 17:25:57.156 [INFO][4983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Namespace="calico-system" Pod="calico-kube-controllers-6bb58f788f-rt28s" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0", GenerateName:"calico-kube-controllers-6bb58f788f-", Namespace:"calico-system", SelfLink:"", UID:"4f0bba68-28b8-4297-9bde-8061a24d85f6", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb58f788f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-4-1de611bfb5", ContainerID:"49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82", Pod:"calico-kube-controllers-6bb58f788f-rt28s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16b136df5fc", MAC:"8a:b3:82:00:2d:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:25:57.168872 containerd[1693]: 2025-12-12 17:25:57.165 [INFO][4983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" Namespace="calico-system" Pod="calico-kube-controllers-6bb58f788f-rt28s" WorkloadEndpoint="ci--4515--1--0--4--1de611bfb5-k8s-calico--kube--controllers--6bb58f788f--rt28s-eth0" Dec 12 17:25:57.178000 audit[5009]: NETFILTER_CFG table=filter:135 family=2 entries=56 op=nft_register_chain pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:25:57.178000 audit[5009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=ffffde90b030 a2=0 a3=ffff910f3fa8 items=0 ppid=4305 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.178000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:25:57.194418 containerd[1693]: time="2025-12-12T17:25:57.194374115Z" level=info msg="connecting to shim 49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82" address="unix:///run/containerd/s/5507179bb3b8105b1a48c2530619909f2ad3b40cc6a6fc7ad7580136743d19db" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:57.222045 systemd[1]: Started cri-containerd-49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82.scope - libcontainer container 49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82. 
Dec 12 17:25:57.232000 audit: BPF prog-id=256 op=LOAD Dec 12 17:25:57.233000 audit: BPF prog-id=257 op=LOAD Dec 12 17:25:57.233000 audit[5029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5019 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643265353663386239663466303866303563616461646638386231 Dec 12 17:25:57.233000 audit: BPF prog-id=257 op=UNLOAD Dec 12 17:25:57.233000 audit[5029]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5019 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643265353663386239663466303866303563616461646638386231 Dec 12 17:25:57.233000 audit: BPF prog-id=258 op=LOAD Dec 12 17:25:57.233000 audit[5029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5019 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.233000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643265353663386239663466303866303563616461646638386231 Dec 12 17:25:57.233000 audit: BPF prog-id=259 op=LOAD Dec 12 17:25:57.233000 audit[5029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5019 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643265353663386239663466303866303563616461646638386231 Dec 12 17:25:57.233000 audit: BPF prog-id=259 op=UNLOAD Dec 12 17:25:57.233000 audit[5029]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5019 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643265353663386239663466303866303563616461646638386231 Dec 12 17:25:57.233000 audit: BPF prog-id=258 op=UNLOAD Dec 12 17:25:57.233000 audit[5029]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5019 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:25:57.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643265353663386239663466303866303563616461646638386231 Dec 12 17:25:57.233000 audit: BPF prog-id=260 op=LOAD Dec 12 17:25:57.233000 audit[5029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5019 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643265353663386239663466303866303563616461646638386231 Dec 12 17:25:57.248435 containerd[1693]: time="2025-12-12T17:25:57.248393869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:57.249513 containerd[1693]: time="2025-12-12T17:25:57.249475515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:25:57.249928 containerd[1693]: time="2025-12-12T17:25:57.249556835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:57.249971 kubelet[2914]: E1212 17:25:57.249661 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:25:57.249971 kubelet[2914]: E1212 17:25:57.249705 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:25:57.250096 kubelet[2914]: E1212 17:25:57.249822 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htxpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-679rm_calico-apiserver(96a8850e-ec22-48ad-ae30-8b21eeca5a2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:57.251353 kubelet[2914]: E1212 17:25:57.251312 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:25:57.263471 containerd[1693]: time="2025-12-12T17:25:57.263428945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb58f788f-rt28s,Uid:4f0bba68-28b8-4297-9bde-8061a24d85f6,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"49d2e56c8b9f4f08f05cadadf88b19255e3ff7ebe57595d35ea433b385eafd82\"" Dec 12 17:25:57.265204 containerd[1693]: time="2025-12-12T17:25:57.265172714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:25:57.341537 kubelet[2914]: E1212 17:25:57.341494 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:25:57.345734 kubelet[2914]: E1212 17:25:57.345669 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:25:57.367000 audit[5060]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:57.367000 audit[5060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe66ffc20 a2=0 a3=1 items=0 ppid=3033 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.367000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:57.375000 audit[5060]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:57.375000 audit[5060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe66ffc20 a2=0 a3=1 items=0 ppid=3033 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:57.383393 systemd-networkd[1605]: cali429eb36160b: Gained IPv6LL Dec 12 17:25:57.402867 kubelet[2914]: I1212 17:25:57.401765 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4xpx4" podStartSLOduration=46.401746368 podStartE2EDuration="46.401746368s" podCreationTimestamp="2025-12-12 17:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:57.401187045 +0000 UTC m=+50.510040211" watchObservedRunningTime="2025-12-12 17:25:57.401746368 +0000 UTC m=+50.510599534" Dec 12 17:25:57.405000 audit[5062]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=5062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:57.405000 audit[5062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcfa17180 a2=0 a3=1 items=0 ppid=3033 pid=5062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.405000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:57.410000 audit[5062]: NETFILTER_CFG table=nat:139 family=2 entries=14 op=nft_register_rule pid=5062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:57.410000 audit[5062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcfa17180 a2=0 a3=1 items=0 ppid=3033 pid=5062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:57.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:57.589411 containerd[1693]: time="2025-12-12T17:25:57.589306401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:57.591018 containerd[1693]: time="2025-12-12T17:25:57.590969850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:25:57.591072 containerd[1693]: time="2025-12-12T17:25:57.591014170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:57.591210 kubelet[2914]: E1212 17:25:57.591175 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:25:57.591269 kubelet[2914]: E1212 17:25:57.591225 2914 kuberuntime_image.go:42] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:25:57.591777 kubelet[2914]: E1212 17:25:57.591390 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49bl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bb58f788f-rt28s_calico-system(4f0bba68-28b8-4297-9bde-8061a24d85f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:57.592645 kubelet[2914]: E1212 17:25:57.592613 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:25:57.639119 
systemd-networkd[1605]: cali9587c80e140: Gained IPv6LL Dec 12 17:25:57.767008 systemd-networkd[1605]: cali90906d4967b: Gained IPv6LL Dec 12 17:25:57.895097 systemd-networkd[1605]: calib2949f7cc8c: Gained IPv6LL Dec 12 17:25:58.351361 kubelet[2914]: E1212 17:25:58.351030 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:25:58.352028 kubelet[2914]: E1212 17:25:58.351987 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:25:58.352579 kubelet[2914]: E1212 17:25:58.352545 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 
17:25:58.362813 kubelet[2914]: I1212 17:25:58.362209 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-txwdj" podStartSLOduration=47.362190928 podStartE2EDuration="47.362190928s" podCreationTimestamp="2025-12-12 17:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:57.416767845 +0000 UTC m=+50.525620971" watchObservedRunningTime="2025-12-12 17:25:58.362190928 +0000 UTC m=+51.471044094" Dec 12 17:25:58.424000 audit[5064]: NETFILTER_CFG table=filter:140 family=2 entries=17 op=nft_register_rule pid=5064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:58.424000 audit[5064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe8aa0f20 a2=0 a3=1 items=0 ppid=3033 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:58.424000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:58.436000 audit[5064]: NETFILTER_CFG table=nat:141 family=2 entries=47 op=nft_register_chain pid=5064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:25:58.436000 audit[5064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe8aa0f20 a2=0 a3=1 items=0 ppid=3033 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:58.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:25:58.791127 systemd-networkd[1605]: cali16b136df5fc: Gained IPv6LL Dec 
12 17:26:06.976567 containerd[1693]: time="2025-12-12T17:26:06.976509378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:26:07.320058 containerd[1693]: time="2025-12-12T17:26:07.319953603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:07.321609 containerd[1693]: time="2025-12-12T17:26:07.321547371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:26:07.321704 containerd[1693]: time="2025-12-12T17:26:07.321639731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:07.321814 kubelet[2914]: E1212 17:26:07.321769 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:26:07.322117 kubelet[2914]: E1212 17:26:07.321823 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:26:07.322117 kubelet[2914]: E1212 17:26:07.321960 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2ec095a6f82141b6ae474647ccc0bac7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:07.324282 containerd[1693]: time="2025-12-12T17:26:07.324247385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:26:07.694488 containerd[1693]: 
time="2025-12-12T17:26:07.694358665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:07.696557 containerd[1693]: time="2025-12-12T17:26:07.696483436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:26:07.696649 containerd[1693]: time="2025-12-12T17:26:07.696581156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:07.696813 kubelet[2914]: E1212 17:26:07.696748 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:26:07.696813 kubelet[2914]: E1212 17:26:07.696800 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:26:07.697150 kubelet[2914]: E1212 17:26:07.697098 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:07.698376 kubelet[2914]: E1212 17:26:07.698332 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:26:08.976123 containerd[1693]: time="2025-12-12T17:26:08.975977817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:26:09.309963 containerd[1693]: time="2025-12-12T17:26:09.309669752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:09.311623 containerd[1693]: time="2025-12-12T17:26:09.311585882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:26:09.311783 containerd[1693]: time="2025-12-12T17:26:09.311663923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:09.311821 kubelet[2914]: E1212 17:26:09.311774 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:26:09.311821 kubelet[2914]: E1212 17:26:09.311818 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:26:09.312120 kubelet[2914]: E1212 17:26:09.311993 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsx5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pjdtp_calico-system(9c61390d-c42e-46b3-8a0d-2bc07904de15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:09.313933 kubelet[2914]: E1212 17:26:09.313229 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 
12 17:26:09.976126 containerd[1693]: time="2025-12-12T17:26:09.975985298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:26:10.311986 containerd[1693]: time="2025-12-12T17:26:10.311786644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:10.313839 containerd[1693]: time="2025-12-12T17:26:10.313738214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:26:10.313913 containerd[1693]: time="2025-12-12T17:26:10.313843735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:10.314166 kubelet[2914]: E1212 17:26:10.314101 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:26:10.314448 kubelet[2914]: E1212 17:26:10.314167 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:26:10.314448 kubelet[2914]: E1212 17:26:10.314389 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 12 17:26:10.314795 containerd[1693]: time="2025-12-12T17:26:10.314769939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:26:10.665054 containerd[1693]: time="2025-12-12T17:26:10.664817158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:10.666540 containerd[1693]: time="2025-12-12T17:26:10.666504287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:26:10.666622 containerd[1693]: time="2025-12-12T17:26:10.666527527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:10.666772 kubelet[2914]: E1212 17:26:10.666735 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:10.666819 kubelet[2914]: E1212 17:26:10.666784 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:10.667079 kubelet[2914]: E1212 17:26:10.667040 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gntvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-r69nw_calico-apiserver(482c1729-f8e9-4de5-b13e-dc7cf991c00d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:10.667174 containerd[1693]: time="2025-12-12T17:26:10.667085530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:26:10.668543 kubelet[2914]: E1212 17:26:10.668513 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:26:11.008359 containerd[1693]: time="2025-12-12T17:26:11.008234463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:11.009803 containerd[1693]: time="2025-12-12T17:26:11.009747751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:26:11.009899 containerd[1693]: time="2025-12-12T17:26:11.009781591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:11.010009 kubelet[2914]: E1212 17:26:11.009972 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:26:11.010074 
kubelet[2914]: E1212 17:26:11.010023 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:26:11.010188 kubelet[2914]: E1212 17:26:11.010146 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:11.011357 kubelet[2914]: E1212 17:26:11.011307 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:26:11.976115 containerd[1693]: time="2025-12-12T17:26:11.976066500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:26:12.315398 containerd[1693]: time="2025-12-12T17:26:12.315351544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:12.316684 containerd[1693]: time="2025-12-12T17:26:12.316636431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:26:12.316745 containerd[1693]: time="2025-12-12T17:26:12.316652391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:12.316946 kubelet[2914]: E1212 17:26:12.316889 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:26:12.317223 kubelet[2914]: E1212 17:26:12.316958 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:26:12.317223 kubelet[2914]: E1212 17:26:12.317092 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49bl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bb58f788f-rt28s_calico-system(4f0bba68-28b8-4297-9bde-8061a24d85f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:12.318538 kubelet[2914]: E1212 17:26:12.318503 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:26:13.975634 containerd[1693]: time="2025-12-12T17:26:13.975587260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:26:14.300956 containerd[1693]: time="2025-12-12T17:26:14.300664752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:26:14.302750 containerd[1693]: time="2025-12-12T17:26:14.302681722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:26:14.302898 containerd[1693]: time="2025-12-12T17:26:14.302729042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:14.302948 kubelet[2914]: E1212 17:26:14.302908 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:14.303408 kubelet[2914]: E1212 17:26:14.302965 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:14.303408 kubelet[2914]: E1212 17:26:14.303108 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htxpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-679rm_calico-apiserver(96a8850e-ec22-48ad-ae30-8b21eeca5a2c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:14.304416 kubelet[2914]: E1212 17:26:14.304368 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:26:21.976657 kubelet[2914]: E1212 17:26:21.976555 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:26:22.976117 kubelet[2914]: E1212 17:26:22.976070 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:26:23.976042 kubelet[2914]: E1212 17:26:23.975948 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:26:23.976593 kubelet[2914]: E1212 17:26:23.976534 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:26:24.976959 kubelet[2914]: E1212 17:26:24.976702 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:26:25.975216 kubelet[2914]: E1212 17:26:25.975173 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:26:34.976678 containerd[1693]: time="2025-12-12T17:26:34.976616127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:26:35.320242 containerd[1693]: time="2025-12-12T17:26:35.320070872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:35.321390 containerd[1693]: time="2025-12-12T17:26:35.321280918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:26:35.321390 containerd[1693]: time="2025-12-12T17:26:35.321349318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:35.321563 kubelet[2914]: E1212 17:26:35.321522 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:35.322058 kubelet[2914]: E1212 17:26:35.321570 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:35.322058 kubelet[2914]: E1212 17:26:35.321784 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gntvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-r69nw_calico-apiserver(482c1729-f8e9-4de5-b13e-dc7cf991c00d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:35.322183 containerd[1693]: time="2025-12-12T17:26:35.321856681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:26:35.323573 kubelet[2914]: E1212 17:26:35.323534 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:26:35.655280 containerd[1693]: time="2025-12-12T17:26:35.655136454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:26:35.656577 containerd[1693]: time="2025-12-12T17:26:35.656531021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:26:35.656714 containerd[1693]: time="2025-12-12T17:26:35.656596702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:35.657227 kubelet[2914]: E1212 17:26:35.656954 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:26:35.657227 kubelet[2914]: E1212 17:26:35.657026 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:26:35.657227 kubelet[2914]: E1212 17:26:35.657166 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49bl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bb58f788f-rt28s_calico-system(4f0bba68-28b8-4297-9bde-8061a24d85f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:35.658862 kubelet[2914]: E1212 17:26:35.658567 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:26:36.978824 containerd[1693]: time="2025-12-12T17:26:36.978777540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:26:37.306094 containerd[1693]: time="2025-12-12T17:26:37.305843801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:26:37.307592 containerd[1693]: time="2025-12-12T17:26:37.307455090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:26:37.307592 containerd[1693]: time="2025-12-12T17:26:37.307487810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:37.307763 kubelet[2914]: E1212 17:26:37.307719 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:26:37.308149 kubelet[2914]: E1212 17:26:37.307772 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:26:37.308149 kubelet[2914]: E1212 17:26:37.308016 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsx5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pjdtp_calico-system(9c61390d-c42e-46b3-8a0d-2bc07904de15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:37.308435 containerd[1693]: time="2025-12-12T17:26:37.308387894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:26:37.309858 kubelet[2914]: E1212 17:26:37.309795 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:26:37.660166 containerd[1693]: time="2025-12-12T17:26:37.660116041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:37.663219 containerd[1693]: 
time="2025-12-12T17:26:37.663108577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:26:37.663219 containerd[1693]: time="2025-12-12T17:26:37.663207297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:37.663457 kubelet[2914]: E1212 17:26:37.663410 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:26:37.663723 kubelet[2914]: E1212 17:26:37.663465 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:26:37.663723 kubelet[2914]: E1212 17:26:37.663622 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2ec095a6f82141b6ae474647ccc0bac7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:37.667146 containerd[1693]: time="2025-12-12T17:26:37.667096597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:26:38.012722 containerd[1693]: 
time="2025-12-12T17:26:38.012483152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:38.016783 containerd[1693]: time="2025-12-12T17:26:38.016695293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:26:38.016929 containerd[1693]: time="2025-12-12T17:26:38.016737253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:38.017014 kubelet[2914]: E1212 17:26:38.016964 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:26:38.017090 kubelet[2914]: E1212 17:26:38.017014 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:26:38.017199 kubelet[2914]: E1212 17:26:38.017134 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:38.018556 kubelet[2914]: E1212 17:26:38.018503 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:26:38.976144 containerd[1693]: time="2025-12-12T17:26:38.976104728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:26:39.345034 containerd[1693]: time="2025-12-12T17:26:39.344950922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:39.346818 containerd[1693]: time="2025-12-12T17:26:39.346758851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:26:39.346928 containerd[1693]: time="2025-12-12T17:26:39.346842972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:39.347058 kubelet[2914]: E1212 17:26:39.347026 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:26:39.347422 kubelet[2914]: E1212 17:26:39.347072 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:26:39.347422 kubelet[2914]: E1212 17:26:39.347221 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:39.349479 containerd[1693]: time="2025-12-12T17:26:39.349450945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:26:39.697331 containerd[1693]: time="2025-12-12T17:26:39.697173272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:39.699168 containerd[1693]: time="2025-12-12T17:26:39.699106242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:26:39.699256 containerd[1693]: time="2025-12-12T17:26:39.699212002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:39.699444 kubelet[2914]: E1212 17:26:39.699384 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:26:39.699500 
kubelet[2914]: E1212 17:26:39.699458 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:26:39.699622 kubelet[2914]: E1212 17:26:39.699582 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:39.700805 kubelet[2914]: E1212 17:26:39.700775 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:26:39.976633 containerd[1693]: time="2025-12-12T17:26:39.976501371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:26:40.525887 containerd[1693]: time="2025-12-12T17:26:40.525798962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:40.527536 containerd[1693]: time="2025-12-12T17:26:40.527475291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:26:40.527613 containerd[1693]: time="2025-12-12T17:26:40.527532851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:40.527751 kubelet[2914]: E1212 17:26:40.527705 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:40.527997 kubelet[2914]: E1212 17:26:40.527767 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:40.527997 kubelet[2914]: E1212 17:26:40.527927 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htxpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-679rm_calico-apiserver(96a8850e-ec22-48ad-ae30-8b21eeca5a2c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:40.529358 kubelet[2914]: E1212 17:26:40.529325 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:26:47.976869 kubelet[2914]: E1212 17:26:47.976748 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:26:49.976016 kubelet[2914]: E1212 17:26:49.975891 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:26:50.976155 kubelet[2914]: E1212 17:26:50.976096 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:26:50.976794 kubelet[2914]: E1212 17:26:50.976176 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:26:50.976794 kubelet[2914]: E1212 17:26:50.976203 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:26:55.976261 kubelet[2914]: E1212 17:26:55.976082 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:26:58.976250 kubelet[2914]: E1212 17:26:58.976192 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:27:01.976502 kubelet[2914]: E1212 17:27:01.975705 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:27:02.976437 kubelet[2914]: E1212 17:27:02.976368 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:27:05.975946 kubelet[2914]: E1212 17:27:05.975712 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:27:05.975946 kubelet[2914]: E1212 17:27:05.975896 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:27:06.977542 kubelet[2914]: E1212 17:27:06.977368 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:27:12.975635 kubelet[2914]: E1212 17:27:12.975568 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:27:13.975560 kubelet[2914]: E1212 17:27:13.975446 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:27:14.773057 update_engine[1671]: I20251212 17:27:14.772934 1671 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 12 17:27:14.773057 update_engine[1671]: I20251212 17:27:14.772985 1671 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 12 17:27:14.773426 update_engine[1671]: I20251212 17:27:14.773211 1671 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 12 17:27:14.774200 update_engine[1671]: I20251212 17:27:14.773558 1671 omaha_request_params.cc:62] Current group set to beta Dec 12 17:27:14.774200 update_engine[1671]: I20251212 17:27:14.773654 1671 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 12 17:27:14.774200 update_engine[1671]: I20251212 17:27:14.773662 1671 update_attempter.cc:643] Scheduling an action processor start. 
Dec 12 17:27:14.774200 update_engine[1671]: I20251212 17:27:14.773675 1671 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 17:27:14.775304 update_engine[1671]: I20251212 17:27:14.774607 1671 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 12 17:27:14.775304 update_engine[1671]: I20251212 17:27:14.774684 1671 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 17:27:14.775304 update_engine[1671]: I20251212 17:27:14.774693 1671 omaha_request_action.cc:272] Request: Dec 12 17:27:14.775304 update_engine[1671]: Dec 12 17:27:14.775304 update_engine[1671]: Dec 12 17:27:14.775304 update_engine[1671]: Dec 12 17:27:14.775304 update_engine[1671]: Dec 12 17:27:14.775304 update_engine[1671]: Dec 12 17:27:14.775304 update_engine[1671]: Dec 12 17:27:14.775304 update_engine[1671]: Dec 12 17:27:14.775304 update_engine[1671]: Dec 12 17:27:14.775304 update_engine[1671]: I20251212 17:27:14.774699 1671 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 17:27:14.775598 locksmithd[1723]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 12 17:27:14.777336 update_engine[1671]: I20251212 17:27:14.777295 1671 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 17:27:14.778264 update_engine[1671]: I20251212 17:27:14.778228 1671 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 12 17:27:14.787501 update_engine[1671]: E20251212 17:27:14.787451 1671 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 17:27:14.787575 update_engine[1671]: I20251212 17:27:14.787544 1671 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 12 17:27:16.977937 kubelet[2914]: E1212 17:27:16.977868 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:27:17.975531 kubelet[2914]: E1212 17:27:17.975473 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:27:18.976464 kubelet[2914]: E1212 17:27:18.976385 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:27:18.977876 containerd[1693]: time="2025-12-12T17:27:18.976434610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:27:19.312951 containerd[1693]: time="2025-12-12T17:27:19.312656958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:19.314431 containerd[1693]: time="2025-12-12T17:27:19.314379007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:19.314925 containerd[1693]: time="2025-12-12T17:27:19.314413607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:19.314980 kubelet[2914]: E1212 17:27:19.314615 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:19.314980 kubelet[2914]: E1212 17:27:19.314660 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:19.314980 kubelet[2914]: E1212 17:27:19.314801 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsx5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler
:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pjdtp_calico-system(9c61390d-c42e-46b3-8a0d-2bc07904de15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:19.316060 kubelet[2914]: E1212 17:27:19.315995 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:27:24.704975 update_engine[1671]: I20251212 17:27:24.704869 
1671 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 17:27:24.704975 update_engine[1671]: I20251212 17:27:24.704980 1671 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 17:27:24.705472 update_engine[1671]: I20251212 17:27:24.705307 1671 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 17:27:24.712446 update_engine[1671]: E20251212 17:27:24.712378 1671 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 17:27:24.712593 update_engine[1671]: I20251212 17:27:24.712477 1671 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 12 17:27:24.977553 containerd[1693]: time="2025-12-12T17:27:24.977430021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:27:25.301254 containerd[1693]: time="2025-12-12T17:27:25.301117466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:25.303135 containerd[1693]: time="2025-12-12T17:27:25.303063676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:27:25.304537 containerd[1693]: time="2025-12-12T17:27:25.303162996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:25.304615 kubelet[2914]: E1212 17:27:25.303355 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:25.304615 kubelet[2914]: E1212 17:27:25.303413 2914 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:25.304615 kubelet[2914]: E1212 17:27:25.303547 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49bl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bb58f788f-rt28s_calico-system(4f0bba68-28b8-4297-9bde-8061a24d85f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:25.306086 kubelet[2914]: E1212 17:27:25.305232 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:27:26.975959 
containerd[1693]: time="2025-12-12T17:27:26.975889655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:27.315468 containerd[1693]: time="2025-12-12T17:27:27.315428181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:27.317485 containerd[1693]: time="2025-12-12T17:27:27.317433671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:27.317679 containerd[1693]: time="2025-12-12T17:27:27.317494711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:27.317737 kubelet[2914]: E1212 17:27:27.317692 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:27.318120 kubelet[2914]: E1212 17:27:27.317747 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:27.318120 kubelet[2914]: E1212 17:27:27.317915 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gntvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-r69nw_calico-apiserver(482c1729-f8e9-4de5-b13e-dc7cf991c00d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:27.319136 kubelet[2914]: E1212 17:27:27.319072 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:27:28.976276 containerd[1693]: time="2025-12-12T17:27:28.976233499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:29.297731 containerd[1693]: time="2025-12-12T17:27:29.297349171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:29.301163 containerd[1693]: time="2025-12-12T17:27:29.301090950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:29.303075 containerd[1693]: time="2025-12-12T17:27:29.301179990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:29.303124 kubelet[2914]: E1212 17:27:29.301357 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:29.303124 kubelet[2914]: E1212 17:27:29.301409 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:29.303124 kubelet[2914]: E1212 17:27:29.301530 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2ec095a6f82141b6ae474647ccc0bac7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:29.304489 containerd[1693]: time="2025-12-12T17:27:29.304352846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:29.644357 containerd[1693]: time="2025-12-12T17:27:29.644310814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:29.645864 containerd[1693]: time="2025-12-12T17:27:29.645773261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:27:29.645938 containerd[1693]: time="2025-12-12T17:27:29.645863382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:29.646622 kubelet[2914]: E1212 17:27:29.646558 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:29.646622 kubelet[2914]: E1212 17:27:29.646614 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:29.646759 kubelet[2914]: E1212 17:27:29.646719 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:29.648226 kubelet[2914]: E1212 17:27:29.648011 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:27:30.976193 containerd[1693]: time="2025-12-12T17:27:30.975454697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:31.312499 containerd[1693]: time="2025-12-12T17:27:31.312252609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:31.313880 containerd[1693]: time="2025-12-12T17:27:31.313827737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:31.313980 containerd[1693]: time="2025-12-12T17:27:31.313852457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:31.314141 kubelet[2914]: E1212 17:27:31.314072 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:31.314947 kubelet[2914]: E1212 17:27:31.314282 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:31.314947 kubelet[2914]: E1212 17:27:31.314597 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htxpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-679rm_calico-apiserver(96a8850e-ec22-48ad-ae30-8b21eeca5a2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:31.315161 containerd[1693]: time="2025-12-12T17:27:31.315135263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:27:31.315910 kubelet[2914]: E1212 17:27:31.315869 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:27:31.641942 containerd[1693]: time="2025-12-12T17:27:31.641893363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:27:31.643529 containerd[1693]: time="2025-12-12T17:27:31.643484012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:31.643773 containerd[1693]: time="2025-12-12T17:27:31.643564892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:31.643819 kubelet[2914]: E1212 17:27:31.643713 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:31.643819 kubelet[2914]: E1212 17:27:31.643801 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:31.644115 kubelet[2914]: E1212 17:27:31.644039 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 12 17:27:31.647933 containerd[1693]: time="2025-12-12T17:27:31.647828954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:31.805345 systemd[1]: Started sshd@7-10.0.7.100:22-139.178.89.65:49984.service - OpenSSH per-connection server daemon (139.178.89.65:49984). Dec 12 17:27:31.806861 kernel: kauditd_printk_skb: 183 callbacks suppressed Dec 12 17:27:31.806954 kernel: audit: type=1130 audit(1765560451.804:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.7.100:22-139.178.89.65:49984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:31.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.7.100:22-139.178.89.65:49984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:31.991847 containerd[1693]: time="2025-12-12T17:27:31.991592580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:31.993456 containerd[1693]: time="2025-12-12T17:27:31.993330229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:31.993456 containerd[1693]: time="2025-12-12T17:27:31.993411870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:31.993634 kubelet[2914]: E1212 17:27:31.993578 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:31.993681 kubelet[2914]: E1212 17:27:31.993630 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:31.993869 kubelet[2914]: E1212 17:27:31.993786 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabil
ities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:31.995047 kubelet[2914]: E1212 17:27:31.994989 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:27:32.638000 audit[5253]: USER_ACCT pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:32.642670 sshd[5253]: Accepted publickey for core from 
139.178.89.65 port 49984 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:27:32.642000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:32.646252 kernel: audit: type=1101 audit(1765560452.638:757): pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:32.646343 kernel: audit: type=1103 audit(1765560452.642:758): pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:32.643414 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:32.648269 kernel: audit: type=1006 audit(1765560452.642:759): pid=5253 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 12 17:27:32.642000 audit[5253]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffd3ea60 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:32.642000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:32.653258 kernel: audit: type=1300 audit(1765560452.642:759): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffd3ea60 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:32.653328 kernel: audit: type=1327 audit(1765560452.642:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:32.656936 systemd-logind[1668]: New session 8 of user core. Dec 12 17:27:32.668094 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:27:32.669000 audit[5253]: USER_START pid=5253 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:32.671000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:32.676344 kernel: audit: type=1105 audit(1765560452.669:760): pid=5253 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:32.676420 kernel: audit: type=1103 audit(1765560452.671:761): pid=5256 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:32.976595 kubelet[2914]: E1212 17:27:32.976474 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:27:33.199923 sshd[5256]: Connection closed by 139.178.89.65 port 49984 Dec 12 17:27:33.200352 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:33.200000 audit[5253]: USER_END pid=5253 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:33.204183 systemd[1]: sshd@7-10.0.7.100:22-139.178.89.65:49984.service: Deactivated successfully. Dec 12 17:27:33.200000 audit[5253]: CRED_DISP pid=5253 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:33.206079 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:27:33.207231 systemd-logind[1668]: Session 8 logged out. Waiting for processes to exit. 
Dec 12 17:27:33.207780 kernel: audit: type=1106 audit(1765560453.200:762): pid=5253 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:33.207817 kernel: audit: type=1104 audit(1765560453.200:763): pid=5253 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:33.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.7.100:22-139.178.89.65:49984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:33.208535 systemd-logind[1668]: Removed session 8.
Dec 12 17:27:34.697439 update_engine[1671]: I20251212 17:27:34.697363 1671 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 12 17:27:34.697841 update_engine[1671]: I20251212 17:27:34.697461 1671 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 12 17:27:34.697841 update_engine[1671]: I20251212 17:27:34.697803 1671 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 12 17:27:34.704192 update_engine[1671]: E20251212 17:27:34.704129 1671 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Dec 12 17:27:34.704305 update_engine[1671]: I20251212 17:27:34.704222 1671 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 12 17:27:38.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.7.100:22-139.178.89.65:49992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:38.377572 systemd[1]: Started sshd@8-10.0.7.100:22-139.178.89.65:49992.service - OpenSSH per-connection server daemon (139.178.89.65:49992).
Dec 12 17:27:38.378271 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 17:27:38.378298 kernel: audit: type=1130 audit(1765560458.376:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.7.100:22-139.178.89.65:49992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:39.206000 audit[5273]: USER_ACCT pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.207816 sshd[5273]: Accepted publickey for core from 139.178.89.65 port 49992 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:27:39.210000 audit[5273]: CRED_ACQ pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.213801 kernel: audit: type=1101 audit(1765560459.206:766): pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.213888 kernel: audit: type=1103 audit(1765560459.210:767): pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.211449 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:27:39.215789 kernel: audit: type=1006 audit(1765560459.210:768): pid=5273 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Dec 12 17:27:39.210000 audit[5273]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffedccb0e0 a2=3 a3=0 items=0 ppid=1 pid=5273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:27:39.218902 kernel: audit: type=1300 audit(1765560459.210:768): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffedccb0e0 a2=3 a3=0 items=0 ppid=1 pid=5273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:27:39.218973 kernel: audit: type=1327 audit(1765560459.210:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:27:39.210000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:27:39.216465 systemd-logind[1668]: New session 9 of user core.
Dec 12 17:27:39.222042 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 17:27:39.225000 audit[5273]: USER_START pid=5273 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.226000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.232465 kernel: audit: type=1105 audit(1765560459.225:769): pid=5273 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.232677 kernel: audit: type=1103 audit(1765560459.226:770): pid=5276 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.764409 sshd[5276]: Connection closed by 139.178.89.65 port 49992
Dec 12 17:27:39.765099 sshd-session[5273]: pam_unix(sshd:session): session closed for user core
Dec 12 17:27:39.765000 audit[5273]: USER_END pid=5273 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.770767 systemd[1]: sshd@8-10.0.7.100:22-139.178.89.65:49992.service: Deactivated successfully.
Dec 12 17:27:39.765000 audit[5273]: CRED_DISP pid=5273 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.773591 kernel: audit: type=1106 audit(1765560459.765:771): pid=5273 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.773688 kernel: audit: type=1104 audit(1765560459.765:772): pid=5273 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:39.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.7.100:22-139.178.89.65:49992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:39.774116 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 17:27:39.776162 systemd-logind[1668]: Session 9 logged out. Waiting for processes to exit.
Dec 12 17:27:39.777587 systemd-logind[1668]: Removed session 9.
Dec 12 17:27:39.940643 systemd[1]: Started sshd@9-10.0.7.100:22-139.178.89.65:50006.service - OpenSSH per-connection server daemon (139.178.89.65:50006).
Dec 12 17:27:39.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.7.100:22-139.178.89.65:50006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:39.975550 kubelet[2914]: E1212 17:27:39.975479 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6"
Dec 12 17:27:39.975550 kubelet[2914]: E1212 17:27:39.975548 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d"
Dec 12 17:27:40.799000 audit[5291]: USER_ACCT pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:40.800816 sshd[5291]: Accepted publickey for core from 139.178.89.65 port 50006 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:27:40.801000 audit[5291]: CRED_ACQ pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:40.801000 audit[5291]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe73cbab0 a2=3 a3=0 items=0 ppid=1 pid=5291 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:27:40.801000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:27:40.802428 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:27:40.809272 systemd-logind[1668]: New session 10 of user core.
Dec 12 17:27:40.815101 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 17:27:40.819000 audit[5291]: USER_START pid=5291 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:40.821000 audit[5294]: CRED_ACQ pid=5294 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:40.976117 kubelet[2914]: E1212 17:27:40.976062 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c"
Dec 12 17:27:41.379862 sshd[5294]: Connection closed by 139.178.89.65 port 50006
Dec 12 17:27:41.380446 sshd-session[5291]: pam_unix(sshd:session): session closed for user core
Dec 12 17:27:41.380000 audit[5291]: USER_END pid=5291 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:41.380000 audit[5291]: CRED_DISP pid=5291 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:41.384398 systemd[1]: sshd@9-10.0.7.100:22-139.178.89.65:50006.service: Deactivated successfully.
Dec 12 17:27:41.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.7.100:22-139.178.89.65:50006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:41.386553 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 17:27:41.387608 systemd-logind[1668]: Session 10 logged out. Waiting for processes to exit.
Dec 12 17:27:41.389176 systemd-logind[1668]: Removed session 10.
Dec 12 17:27:41.540681 systemd[1]: Started sshd@10-10.0.7.100:22-139.178.89.65:56728.service - OpenSSH per-connection server daemon (139.178.89.65:56728).
Dec 12 17:27:41.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.7.100:22-139.178.89.65:56728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:42.356000 audit[5306]: USER_ACCT pid=5306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:42.357557 sshd[5306]: Accepted publickey for core from 139.178.89.65 port 56728 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:27:42.357000 audit[5306]: CRED_ACQ pid=5306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:42.357000 audit[5306]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcbc87fd0 a2=3 a3=0 items=0 ppid=1 pid=5306 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:27:42.357000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:27:42.358977 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:27:42.363886 systemd-logind[1668]: New session 11 of user core.
Dec 12 17:27:42.371077 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 17:27:42.374000 audit[5306]: USER_START pid=5306 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:42.376000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:42.908383 sshd[5312]: Connection closed by 139.178.89.65 port 56728
Dec 12 17:27:42.909089 sshd-session[5306]: pam_unix(sshd:session): session closed for user core
Dec 12 17:27:42.909000 audit[5306]: USER_END pid=5306 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:42.909000 audit[5306]: CRED_DISP pid=5306 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:42.913140 systemd-logind[1668]: Session 11 logged out. Waiting for processes to exit.
Dec 12 17:27:42.913382 systemd[1]: sshd@10-10.0.7.100:22-139.178.89.65:56728.service: Deactivated successfully.
Dec 12 17:27:42.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.7.100:22-139.178.89.65:56728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:42.915578 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 17:27:42.917752 systemd-logind[1668]: Removed session 11.
Dec 12 17:27:44.696957 update_engine[1671]: I20251212 17:27:44.696864 1671 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 12 17:27:44.697331 update_engine[1671]: I20251212 17:27:44.696969 1671 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 12 17:27:44.697566 update_engine[1671]: I20251212 17:27:44.697507 1671 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 12 17:27:44.704809 update_engine[1671]: E20251212 17:27:44.704756 1671 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Dec 12 17:27:44.704886 update_engine[1671]: I20251212 17:27:44.704860 1671 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 12 17:27:44.704886 update_engine[1671]: I20251212 17:27:44.704871 1671 omaha_request_action.cc:617] Omaha request response:
Dec 12 17:27:44.704995 update_engine[1671]: E20251212 17:27:44.704965 1671 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 12 17:27:44.704995 update_engine[1671]: I20251212 17:27:44.704991 1671 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 12 17:27:44.705044 update_engine[1671]: I20251212 17:27:44.704996 1671 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 12 17:27:44.705044 update_engine[1671]: I20251212 17:27:44.705001 1671 update_attempter.cc:306] Processing Done.
Dec 12 17:27:44.705044 update_engine[1671]: E20251212 17:27:44.705016 1671 update_attempter.cc:619] Update failed.
Dec 12 17:27:44.705044 update_engine[1671]: I20251212 17:27:44.705021 1671 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 12 17:27:44.705044 update_engine[1671]: I20251212 17:27:44.705025 1671 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 12 17:27:44.705044 update_engine[1671]: I20251212 17:27:44.705030 1671 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 12 17:27:44.705157 update_engine[1671]: I20251212 17:27:44.705093 1671 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 12 17:27:44.705157 update_engine[1671]: I20251212 17:27:44.705113 1671 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 12 17:27:44.705157 update_engine[1671]: I20251212 17:27:44.705118 1671 omaha_request_action.cc:272] Request:
Dec 12 17:27:44.705157 update_engine[1671]:
Dec 12 17:27:44.705157 update_engine[1671]:
Dec 12 17:27:44.705157 update_engine[1671]:
Dec 12 17:27:44.705157 update_engine[1671]:
Dec 12 17:27:44.705157 update_engine[1671]:
Dec 12 17:27:44.705157 update_engine[1671]:
Dec 12 17:27:44.705157 update_engine[1671]: I20251212 17:27:44.705124 1671 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 12 17:27:44.705157 update_engine[1671]: I20251212 17:27:44.705147 1671 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 12 17:27:44.705407 locksmithd[1723]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 12 17:27:44.705684 update_engine[1671]: I20251212 17:27:44.705621 1671 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 12 17:27:44.712816 update_engine[1671]: E20251212 17:27:44.712762 1671 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Dec 12 17:27:44.712879 update_engine[1671]: I20251212 17:27:44.712863 1671 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 12 17:27:44.712879 update_engine[1671]: I20251212 17:27:44.712873 1671 omaha_request_action.cc:617] Omaha request response:
Dec 12 17:27:44.712948 update_engine[1671]: I20251212 17:27:44.712879 1671 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 12 17:27:44.712948 update_engine[1671]: I20251212 17:27:44.712884 1671 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 12 17:27:44.712948 update_engine[1671]: I20251212 17:27:44.712888 1671 update_attempter.cc:306] Processing Done.
Dec 12 17:27:44.712948 update_engine[1671]: I20251212 17:27:44.712893 1671 update_attempter.cc:310] Error event sent.
Dec 12 17:27:44.712948 update_engine[1671]: I20251212 17:27:44.712913 1671 update_check_scheduler.cc:74] Next update check in 49m15s
Dec 12 17:27:44.713283 locksmithd[1723]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 12 17:27:44.976718 kubelet[2914]: E1212 17:27:44.976557 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c"
Dec 12 17:27:44.977317 kubelet[2914]: E1212 17:27:44.977157 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18"
Dec 12 17:27:45.975258 kubelet[2914]: E1212 17:27:45.975168 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15"
Dec 12 17:27:48.084544 systemd[1]: Started sshd@11-10.0.7.100:22-139.178.89.65:56734.service - OpenSSH per-connection server daemon (139.178.89.65:56734).
Dec 12 17:27:48.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.7.100:22-139.178.89.65:56734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:48.087851 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 12 17:27:48.087915 kernel: audit: type=1130 audit(1765560468.083:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.7.100:22-139.178.89.65:56734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:48.918276 sshd[5330]: Accepted publickey for core from 139.178.89.65 port 56734 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:27:48.917000 audit[5330]: USER_ACCT pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:48.920000 audit[5330]: CRED_ACQ pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:48.922177 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:27:48.924684 kernel: audit: type=1101 audit(1765560468.917:793): pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:48.924742 kernel: audit: type=1103 audit(1765560468.920:794): pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:48.926479 kernel: audit: type=1006 audit(1765560468.920:795): pid=5330 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Dec 12 17:27:48.926534 kernel: audit: type=1300 audit(1765560468.920:795): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe56e6790 a2=3 a3=0 items=0 ppid=1 pid=5330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:27:48.920000 audit[5330]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe56e6790 a2=3 a3=0 items=0 ppid=1 pid=5330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:27:48.920000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:27:48.930869 kernel: audit: type=1327 audit(1765560468.920:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:27:48.930431 systemd-logind[1668]: New session 12 of user core.
Dec 12 17:27:48.937524 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 17:27:48.939000 audit[5330]: USER_START pid=5330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:48.941000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:48.946229 kernel: audit: type=1105 audit(1765560468.939:796): pid=5330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:48.946305 kernel: audit: type=1103 audit(1765560468.941:797): pid=5333 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:49.462334 sshd[5333]: Connection closed by 139.178.89.65 port 56734
Dec 12 17:27:49.461630 sshd-session[5330]: pam_unix(sshd:session): session closed for user core
Dec 12 17:27:49.462000 audit[5330]: USER_END pid=5330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:49.465714 systemd[1]: sshd@11-10.0.7.100:22-139.178.89.65:56734.service: Deactivated successfully.
Dec 12 17:27:49.462000 audit[5330]: CRED_DISP pid=5330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:49.467482 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 17:27:49.468230 systemd-logind[1668]: Session 12 logged out. Waiting for processes to exit.
Dec 12 17:27:49.468976 kernel: audit: type=1106 audit(1765560469.462:798): pid=5330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:49.469040 kernel: audit: type=1104 audit(1765560469.462:799): pid=5330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:49.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.7.100:22-139.178.89.65:56734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:49.469677 systemd-logind[1668]: Removed session 12.
Dec 12 17:27:52.975758 kubelet[2914]: E1212 17:27:52.975698 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6"
Dec 12 17:27:53.976270 kubelet[2914]: E1212 17:27:53.976216 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d"
Dec 12 17:27:54.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.7.100:22-139.178.89.65:50058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:54.660680 systemd[1]: Started sshd@12-10.0.7.100:22-139.178.89.65:50058.service - OpenSSH per-connection server daemon (139.178.89.65:50058).
Dec 12 17:27:54.661553 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 17:27:54.661592 kernel: audit: type=1130 audit(1765560474.659:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.7.100:22-139.178.89.65:50058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:27:54.976823 kubelet[2914]: E1212 17:27:54.976465 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c"
Dec 12 17:27:55.560000 audit[5370]: USER_ACCT pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:55.562029 sshd[5370]: Accepted publickey for core from 139.178.89.65 port 50058 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:27:55.564000 audit[5370]: CRED_ACQ pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:55.566163 sshd-session[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:27:55.568544 kernel: audit: type=1101 audit(1765560475.560:802): pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:55.568643 kernel: audit: type=1103 audit(1765560475.564:803): pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:27:55.568671 kernel: audit: type=1006 audit(1765560475.564:804): pid=5370 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Dec 12 17:27:55.564000 audit[5370]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7baa910 a2=3 a3=0 items=0 ppid=1 pid=5370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:27:55.571350 systemd-logind[1668]: New session 13 of user core.
Dec 12 17:27:55.573315 kernel: audit: type=1300 audit(1765560475.564:804): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7baa910 a2=3 a3=0 items=0 ppid=1 pid=5370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:27:55.573447 kernel: audit: type=1327 audit(1765560475.564:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:27:55.564000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:27:55.577037 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 17:27:55.579000 audit[5370]: USER_START pid=5370 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:55.580000 audit[5373]: CRED_ACQ pid=5373 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:55.585641 kernel: audit: type=1105 audit(1765560475.579:805): pid=5370 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:55.585752 kernel: audit: type=1103 audit(1765560475.580:806): pid=5373 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:56.146107 sshd[5373]: Connection closed by 139.178.89.65 port 50058 Dec 12 17:27:56.146514 sshd-session[5370]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:56.146000 audit[5370]: USER_END pid=5370 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:56.150804 systemd[1]: sshd@12-10.0.7.100:22-139.178.89.65:50058.service: Deactivated successfully. 
Dec 12 17:27:56.147000 audit[5370]: CRED_DISP pid=5370 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:56.151199 systemd-logind[1668]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:27:56.152969 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:27:56.153620 kernel: audit: type=1106 audit(1765560476.146:807): pid=5370 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:56.153687 kernel: audit: type=1104 audit(1765560476.147:808): pid=5370 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:27:56.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.7.100:22-139.178.89.65:50058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:56.155022 systemd-logind[1668]: Removed session 13. 
Dec 12 17:27:56.977633 kubelet[2914]: E1212 17:27:56.977577 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:27:58.976644 kubelet[2914]: E1212 17:27:58.976582 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:27:59.975372 kubelet[2914]: E1212 17:27:59.975297 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:28:01.319132 systemd[1]: Started sshd@13-10.0.7.100:22-139.178.89.65:59500.service - OpenSSH per-connection server daemon (139.178.89.65:59500). Dec 12 17:28:01.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.7.100:22-139.178.89.65:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:01.322794 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:01.322889 kernel: audit: type=1130 audit(1765560481.318:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.7.100:22-139.178.89.65:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:02.202000 audit[5387]: USER_ACCT pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.203736 sshd[5387]: Accepted publickey for core from 139.178.89.65 port 59500 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:02.205000 audit[5387]: CRED_ACQ pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.207449 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:02.209822 kernel: audit: type=1101 audit(1765560482.202:811): pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.209896 kernel: audit: type=1103 audit(1765560482.205:812): pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.209927 kernel: audit: type=1006 audit(1765560482.205:813): pid=5387 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 17:28:02.205000 audit[5387]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcade6c90 a2=3 a3=0 items=0 ppid=1 pid=5387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.213375 systemd-logind[1668]: New session 14 of user core. Dec 12 17:28:02.214325 kernel: audit: type=1300 audit(1765560482.205:813): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcade6c90 a2=3 a3=0 items=0 ppid=1 pid=5387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:02.205000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:02.217011 kernel: audit: type=1327 audit(1765560482.205:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:02.222098 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 17:28:02.223000 audit[5387]: USER_START pid=5387 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.227854 kernel: audit: type=1105 audit(1765560482.223:814): pid=5387 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.227000 audit[5390]: CRED_ACQ pid=5390 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.230874 kernel: audit: type=1103 audit(1765560482.227:815): pid=5390 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.784976 sshd[5390]: Connection closed by 139.178.89.65 port 59500 Dec 12 17:28:02.785345 sshd-session[5387]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:02.785000 audit[5387]: USER_END pid=5387 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.790172 systemd[1]: sshd@13-10.0.7.100:22-139.178.89.65:59500.service: Deactivated successfully. 
Dec 12 17:28:02.785000 audit[5387]: CRED_DISP pid=5387 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.793770 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:28:02.794828 kernel: audit: type=1106 audit(1765560482.785:816): pid=5387 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.794996 kernel: audit: type=1104 audit(1765560482.785:817): pid=5387 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:02.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.7.100:22-139.178.89.65:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:02.798202 systemd-logind[1668]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:28:02.800247 systemd-logind[1668]: Removed session 14. 
Dec 12 17:28:05.975898 kubelet[2914]: E1212 17:28:05.975731 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:28:06.977525 kubelet[2914]: E1212 17:28:06.977466 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:28:07.960959 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:07.961032 kernel: audit: type=1130 audit(1765560487.958:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.7.100:22-139.178.89.65:59508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:07.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.7.100:22-139.178.89.65:59508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:07.959315 systemd[1]: Started sshd@14-10.0.7.100:22-139.178.89.65:59508.service - OpenSSH per-connection server daemon (139.178.89.65:59508). 
Dec 12 17:28:08.803000 audit[5406]: USER_ACCT pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:08.805031 sshd[5406]: Accepted publickey for core from 139.178.89.65 port 59508 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:08.807000 audit[5406]: CRED_ACQ pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:08.809054 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:08.811536 kernel: audit: type=1101 audit(1765560488.803:820): pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:08.811626 kernel: audit: type=1103 audit(1765560488.807:821): pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:08.813507 kernel: audit: type=1006 audit(1765560488.807:822): pid=5406 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 17:28:08.814003 kernel: audit: type=1300 audit(1765560488.807:822): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6450d90 a2=3 a3=0 items=0 ppid=1 pid=5406 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:08.807000 audit[5406]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6450d90 a2=3 a3=0 items=0 ppid=1 pid=5406 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:08.807000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:08.818388 kernel: audit: type=1327 audit(1765560488.807:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:08.820438 systemd-logind[1668]: New session 15 of user core. Dec 12 17:28:08.828187 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:28:08.829000 audit[5406]: USER_START pid=5406 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:08.833000 audit[5409]: CRED_ACQ pid=5409 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:08.837702 kernel: audit: type=1105 audit(1765560488.829:823): pid=5406 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:08.837781 kernel: audit: type=1103 audit(1765560488.833:824): pid=5409 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:09.345438 sshd[5409]: Connection closed by 139.178.89.65 port 59508 Dec 12 17:28:09.345691 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:09.346000 audit[5406]: USER_END pid=5406 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:09.346000 audit[5406]: CRED_DISP pid=5406 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:09.351870 systemd[1]: sshd@14-10.0.7.100:22-139.178.89.65:59508.service: Deactivated successfully. Dec 12 17:28:09.353610 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 12 17:28:09.353886 kernel: audit: type=1106 audit(1765560489.346:825): pid=5406 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:09.353936 kernel: audit: type=1104 audit(1765560489.346:826): pid=5406 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:09.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.7.100:22-139.178.89.65:59508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:09.354490 systemd-logind[1668]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:28:09.356056 systemd-logind[1668]: Removed session 15. 
Dec 12 17:28:09.976131 kubelet[2914]: E1212 17:28:09.976058 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:28:09.976523 kubelet[2914]: E1212 17:28:09.976351 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:28:10.975536 kubelet[2914]: E1212 17:28:10.975458 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:28:11.976060 kubelet[2914]: E1212 17:28:11.976009 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:28:14.508278 systemd[1]: Started sshd@15-10.0.7.100:22-139.178.89.65:38086.service - OpenSSH per-connection server daemon (139.178.89.65:38086). Dec 12 17:28:14.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.7.100:22-139.178.89.65:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:14.509032 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:14.509091 kernel: audit: type=1130 audit(1765560494.507:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.7.100:22-139.178.89.65:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:15.326000 audit[5425]: USER_ACCT pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.327638 sshd[5425]: Accepted publickey for core from 139.178.89.65 port 38086 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:15.329458 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:15.328000 audit[5425]: CRED_ACQ pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.334036 kernel: audit: type=1101 audit(1765560495.326:829): pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.334104 kernel: audit: type=1103 audit(1765560495.328:830): pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.334123 kernel: audit: type=1006 audit(1765560495.328:831): pid=5425 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 17:28:15.334828 kernel: audit: type=1300 audit(1765560495.328:831): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc8d8a10 a2=3 a3=0 items=0 ppid=1 pid=5425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.328000 audit[5425]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc8d8a10 a2=3 a3=0 items=0 ppid=1 pid=5425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:15.334977 systemd-logind[1668]: New session 16 of user core. Dec 12 17:28:15.328000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:15.338855 kernel: audit: type=1327 audit(1765560495.328:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:15.342105 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:28:15.343000 audit[5425]: USER_START pid=5425 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.345000 audit[5428]: CRED_ACQ pid=5428 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.351240 kernel: audit: type=1105 audit(1765560495.343:832): pid=5425 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.351302 kernel: audit: type=1103 audit(1765560495.345:833): pid=5428 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.860764 sshd[5428]: Connection closed by 139.178.89.65 port 38086 Dec 12 17:28:15.861345 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:15.861000 audit[5425]: USER_END pid=5425 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.865853 systemd-logind[1668]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:28:15.861000 audit[5425]: CRED_DISP pid=5425 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.866015 systemd[1]: sshd@15-10.0.7.100:22-139.178.89.65:38086.service: Deactivated successfully. Dec 12 17:28:15.868019 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 12 17:28:15.868976 kernel: audit: type=1106 audit(1765560495.861:834): pid=5425 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.869062 kernel: audit: type=1104 audit(1765560495.861:835): pid=5425 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:15.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.7.100:22-139.178.89.65:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:15.869616 systemd-logind[1668]: Removed session 16. 
Dec 12 17:28:17.975270 kubelet[2914]: E1212 17:28:17.974907 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:28:20.976407 kubelet[2914]: E1212 17:28:20.976333 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c" Dec 12 17:28:20.976922 kubelet[2914]: E1212 17:28:20.976449 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6" Dec 12 17:28:21.037893 systemd[1]: Started sshd@16-10.0.7.100:22-139.178.89.65:57722.service - OpenSSH per-connection server daemon (139.178.89.65:57722). 
Dec 12 17:28:21.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.7.100:22-139.178.89.65:57722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:21.038905 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:21.038950 kernel: audit: type=1130 audit(1765560501.037:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.7.100:22-139.178.89.65:57722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:21.870000 audit[5466]: USER_ACCT pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:21.871223 sshd[5466]: Accepted publickey for core from 139.178.89.65 port 57722 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:21.873494 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:21.872000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:21.876645 kernel: audit: type=1101 audit(1765560501.870:838): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:21.876724 kernel: audit: type=1103 audit(1765560501.872:839): pid=5466 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:21.876757 kernel: audit: type=1006 audit(1765560501.872:840): pid=5466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 17:28:21.872000 audit[5466]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe13a3510 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.881080 systemd-logind[1668]: New session 17 of user core. Dec 12 17:28:21.881389 kernel: audit: type=1300 audit(1765560501.872:840): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe13a3510 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:21.881421 kernel: audit: type=1327 audit(1765560501.872:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:21.872000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:21.894172 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 17:28:21.895000 audit[5466]: USER_START pid=5466 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:21.899871 kernel: audit: type=1105 audit(1765560501.895:841): pid=5466 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:21.899000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:21.903904 kernel: audit: type=1103 audit(1765560501.899:842): pid=5469 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:22.416939 sshd[5469]: Connection closed by 139.178.89.65 port 57722 Dec 12 17:28:22.417573 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:22.418000 audit[5466]: USER_END pid=5466 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:22.423039 systemd[1]: sshd@16-10.0.7.100:22-139.178.89.65:57722.service: Deactivated successfully. 
Dec 12 17:28:22.418000 audit[5466]: CRED_DISP pid=5466 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:22.426875 kernel: audit: type=1106 audit(1765560502.418:843): pid=5466 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:22.426955 kernel: audit: type=1104 audit(1765560502.418:844): pid=5466 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:22.427770 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:28:22.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.7.100:22-139.178.89.65:57722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:22.429354 systemd-logind[1668]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:28:22.431823 systemd-logind[1668]: Removed session 17. Dec 12 17:28:22.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.7.100:22-139.178.89.65:57738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:22.593709 systemd[1]: Started sshd@17-10.0.7.100:22-139.178.89.65:57738.service - OpenSSH per-connection server daemon (139.178.89.65:57738). 
Dec 12 17:28:22.976397 kubelet[2914]: E1212 17:28:22.976346 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:28:23.442328 sshd[5483]: Accepted publickey for core from 139.178.89.65 port 57738 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:23.441000 audit[5483]: USER_ACCT pid=5483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:23.442000 audit[5483]: CRED_ACQ pid=5483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:23.442000 audit[5483]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce05cdb0 a2=3 a3=0 items=0 ppid=1 pid=5483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:23.442000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:23.443702 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:23.448772 systemd-logind[1668]: New session 18 of user core. Dec 12 17:28:23.458081 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:28:23.459000 audit[5483]: USER_START pid=5483 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:23.461000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:23.977870 kubelet[2914]: E1212 17:28:23.977805 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:28:24.046397 sshd[5486]: Connection closed by 139.178.89.65 port 57738 Dec 12 17:28:24.046725 sshd-session[5483]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:24.047000 audit[5483]: USER_END pid=5483 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:24.047000 audit[5483]: CRED_DISP pid=5483 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:24.052224 systemd-logind[1668]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:28:24.053234 systemd[1]: sshd@17-10.0.7.100:22-139.178.89.65:57738.service: Deactivated successfully. Dec 12 17:28:24.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.7.100:22-139.178.89.65:57738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:24.055477 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:28:24.060620 systemd-logind[1668]: Removed session 18. Dec 12 17:28:24.215131 systemd[1]: Started sshd@18-10.0.7.100:22-139.178.89.65:57742.service - OpenSSH per-connection server daemon (139.178.89.65:57742). Dec 12 17:28:24.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.7.100:22-139.178.89.65:57742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:25.032000 audit[5498]: USER_ACCT pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:25.033659 sshd[5498]: Accepted publickey for core from 139.178.89.65 port 57742 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:25.033000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:25.033000 audit[5498]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc186d1f0 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:25.033000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:25.034967 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:25.039378 systemd-logind[1668]: New session 19 of user core. Dec 12 17:28:25.045038 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 17:28:25.047000 audit[5498]: USER_START pid=5498 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:25.049000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:25.977490 kubelet[2914]: E1212 17:28:25.977410 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:28:26.006000 audit[5513]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:26.006000 audit[5513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffb9f1810 a2=0 a3=1 items=0 ppid=3033 pid=5513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:26.013000 audit[5513]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:26.013000 audit[5513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffb9f1810 a2=0 a3=1 items=0 ppid=3033 pid=5513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:26.038000 audit[5515]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:26.043537 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 12 17:28:26.043782 kernel: audit: type=1325 audit(1765560506.038:863): table=filter:144 family=2 entries=38 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:26.038000 audit[5515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc5fcf1a0 a2=0 a3=1 items=0 ppid=3033 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.047940 kernel: audit: type=1300 audit(1765560506.038:863): arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc5fcf1a0 a2=0 a3=1 items=0 ppid=3033 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:26.051731 kernel: audit: type=1327 audit(1765560506.038:863): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:26.051000 audit[5515]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:26.051000 audit[5515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc5fcf1a0 a2=0 a3=1 items=0 ppid=3033 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.057591 kernel: audit: type=1325 audit(1765560506.051:864): table=nat:145 family=2 entries=20 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:26.057683 kernel: audit: type=1300 audit(1765560506.051:864): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc5fcf1a0 a2=0 a3=1 items=0 ppid=3033 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.057714 kernel: audit: type=1327 audit(1765560506.051:864): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:26.051000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:26.176104 sshd[5501]: Connection closed by 
139.178.89.65 port 57742 Dec 12 17:28:26.176535 sshd-session[5498]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:26.178000 audit[5498]: USER_END pid=5498 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.178000 audit[5498]: CRED_DISP pid=5498 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.182690 systemd[1]: sshd@18-10.0.7.100:22-139.178.89.65:57742.service: Deactivated successfully. Dec 12 17:28:26.185520 kernel: audit: type=1106 audit(1765560506.178:865): pid=5498 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.185595 kernel: audit: type=1104 audit(1765560506.178:866): pid=5498 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.185618 kernel: audit: type=1131 audit(1765560506.182:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.7.100:22-139.178.89.65:57742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:26.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.7.100:22-139.178.89.65:57742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:26.185901 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:28:26.186884 systemd-logind[1668]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:28:26.188599 systemd-logind[1668]: Removed session 19. Dec 12 17:28:26.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.7.100:22-139.178.89.65:57744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:26.341392 systemd[1]: Started sshd@19-10.0.7.100:22-139.178.89.65:57744.service - OpenSSH per-connection server daemon (139.178.89.65:57744). Dec 12 17:28:26.344857 kernel: audit: type=1130 audit(1765560506.339:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.7.100:22-139.178.89.65:57744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:27.149000 audit[5521]: USER_ACCT pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.152019 sshd[5521]: Accepted publickey for core from 139.178.89.65 port 57744 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:27.152000 audit[5521]: CRED_ACQ pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.152000 audit[5521]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd88ca2f0 a2=3 a3=0 items=0 ppid=1 pid=5521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:27.152000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:27.153270 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:27.158470 systemd-logind[1668]: New session 20 of user core. Dec 12 17:28:27.167243 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 17:28:27.167000 audit[5521]: USER_START pid=5521 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.169000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.808921 sshd[5524]: Connection closed by 139.178.89.65 port 57744 Dec 12 17:28:27.808664 sshd-session[5521]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:27.808000 audit[5521]: USER_END pid=5521 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.808000 audit[5521]: CRED_DISP pid=5521 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.813624 systemd[1]: sshd@19-10.0.7.100:22-139.178.89.65:57744.service: Deactivated successfully. Dec 12 17:28:27.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.7.100:22-139.178.89.65:57744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:27.816354 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:28:27.818878 systemd-logind[1668]: Session 20 logged out. 
Waiting for processes to exit. Dec 12 17:28:27.822092 systemd-logind[1668]: Removed session 20. Dec 12 17:28:27.981109 systemd[1]: Started sshd@20-10.0.7.100:22-139.178.89.65:57758.service - OpenSSH per-connection server daemon (139.178.89.65:57758). Dec 12 17:28:27.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.7.100:22-139.178.89.65:57758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:28.812000 audit[5536]: USER_ACCT pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:28.814591 sshd[5536]: Accepted publickey for core from 139.178.89.65 port 57758 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:28.813000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:28.813000 audit[5536]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc40b1de0 a2=3 a3=0 items=0 ppid=1 pid=5536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:28.813000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:28.815821 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:28.821904 systemd-logind[1668]: New session 21 of user core. Dec 12 17:28:28.827019 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:28:28.828000 audit[5536]: USER_START pid=5536 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:28.830000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:28.977864 kubelet[2914]: E1212 17:28:28.977784 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d"
Dec 12 17:28:29.347859 sshd[5539]: Connection closed by 139.178.89.65 port 57758
Dec 12 17:28:29.349085 sshd-session[5536]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:29.349000 audit[5536]: USER_END pid=5536 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:29.349000 audit[5536]: CRED_DISP pid=5536 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:29.354234 systemd-logind[1668]: Session 21 logged out. Waiting for processes to exit.
Dec 12 17:28:29.354535 systemd[1]: sshd@20-10.0.7.100:22-139.178.89.65:57758.service: Deactivated successfully.
Dec 12 17:28:29.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.7.100:22-139.178.89.65:57758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:29.356562 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 17:28:29.358317 systemd-logind[1668]: Removed session 21.
Dec 12 17:28:30.150000 audit[5552]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 12 17:28:30.150000 audit[5552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffc5a3880 a2=0 a3=1 items=0 ppid=3033 pid=5552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:30.150000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 12 17:28:30.156000 audit[5552]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 12 17:28:30.156000 audit[5552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffffc5a3880 a2=0 a3=1 items=0 ppid=3033 pid=5552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:30.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 12 17:28:31.975700 kubelet[2914]: E1212 17:28:31.975625 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c"
Dec 12 17:28:32.977980 kubelet[2914]: E1212 17:28:32.977936 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6"
Dec 12 17:28:34.518324 systemd[1]: Started sshd@21-10.0.7.100:22-139.178.89.65:32818.service - OpenSSH per-connection server daemon (139.178.89.65:32818).
Dec 12 17:28:34.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.7.100:22-139.178.89.65:32818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:34.519376 kernel: kauditd_printk_skb: 27 callbacks suppressed
Dec 12 17:28:34.519424 kernel: audit: type=1130 audit(1765560514.516:888): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.7.100:22-139.178.89.65:32818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:35.333000 audit[5560]: USER_ACCT pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.335589 sshd[5560]: Accepted publickey for core from 139.178.89.65 port 32818 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:28:35.337888 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:35.333000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.341145 kernel: audit: type=1101 audit(1765560515.333:889): pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.341413 kernel: audit: type=1103 audit(1765560515.333:890): pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.341441 kernel: audit: type=1006 audit(1765560515.333:891): pid=5560 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Dec 12 17:28:35.333000 audit[5560]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd50d86e0 a2=3 a3=0 items=0 ppid=1 pid=5560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:35.346364 kernel: audit: type=1300 audit(1765560515.333:891): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd50d86e0 a2=3 a3=0 items=0 ppid=1 pid=5560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:35.346465 kernel: audit: type=1327 audit(1765560515.333:891): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:28:35.333000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:28:35.345355 systemd-logind[1668]: New session 22 of user core.
Dec 12 17:28:35.351088 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 17:28:35.353000 audit[5560]: USER_START pid=5560 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.353000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.360474 kernel: audit: type=1105 audit(1765560515.353:892): pid=5560 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.360570 kernel: audit: type=1103 audit(1765560515.353:893): pid=5563 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.867579 sshd[5563]: Connection closed by 139.178.89.65 port 32818
Dec 12 17:28:35.867983 sshd-session[5560]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:35.867000 audit[5560]: USER_END pid=5560 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.871869 systemd[1]: sshd@21-10.0.7.100:22-139.178.89.65:32818.service: Deactivated successfully.
Dec 12 17:28:35.867000 audit[5560]: CRED_DISP pid=5560 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.873654 systemd[1]: session-22.scope: Deactivated successfully.
Dec 12 17:28:35.875597 kernel: audit: type=1106 audit(1765560515.867:894): pid=5560 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.875689 kernel: audit: type=1104 audit(1765560515.867:895): pid=5560 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:35.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.7.100:22-139.178.89.65:32818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:35.876306 systemd-logind[1668]: Session 22 logged out. Waiting for processes to exit.
Dec 12 17:28:35.877250 systemd-logind[1668]: Removed session 22.
Dec 12 17:28:35.976235 kubelet[2914]: E1212 17:28:35.976186 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c"
Dec 12 17:28:37.976002 kubelet[2914]: E1212 17:28:37.975575 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15"
Dec 12 17:28:39.975365 kubelet[2914]: E1212 17:28:39.975301 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d"
Dec 12 17:28:39.976121 kubelet[2914]: E1212 17:28:39.975689 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18"
Dec 12 17:28:41.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.7.100:22-139.178.89.65:57608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:41.038510 systemd[1]: Started sshd@22-10.0.7.100:22-139.178.89.65:57608.service - OpenSSH per-connection server daemon (139.178.89.65:57608).
Dec 12 17:28:41.039513 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 17:28:41.039570 kernel: audit: type=1130 audit(1765560521.036:897): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.7.100:22-139.178.89.65:57608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:41.870000 audit[5579]: USER_ACCT pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:41.872536 sshd[5579]: Accepted publickey for core from 139.178.89.65 port 57608 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:28:41.873000 audit[5579]: CRED_ACQ pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:41.876165 sshd-session[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:41.878349 kernel: audit: type=1101 audit(1765560521.870:898): pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:41.878437 kernel: audit: type=1103 audit(1765560521.873:899): pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:41.878457 kernel: audit: type=1006 audit(1765560521.873:900): pid=5579 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Dec 12 17:28:41.880178 kernel: audit: type=1300 audit(1765560521.873:900): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb3f65d0 a2=3 a3=0 items=0 ppid=1 pid=5579 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:41.873000 audit[5579]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb3f65d0 a2=3 a3=0 items=0 ppid=1 pid=5579 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:41.873000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:28:41.884119 kernel: audit: type=1327 audit(1765560521.873:900): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:28:41.884120 systemd-logind[1668]: New session 23 of user core.
Dec 12 17:28:41.894149 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 17:28:41.894000 audit[5579]: USER_START pid=5579 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:41.896000 audit[5582]: CRED_ACQ pid=5582 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:41.902132 kernel: audit: type=1105 audit(1765560521.894:901): pid=5579 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:41.902201 kernel: audit: type=1103 audit(1765560521.896:902): pid=5582 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:42.414070 sshd[5582]: Connection closed by 139.178.89.65 port 57608
Dec 12 17:28:42.414683 sshd-session[5579]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:42.414000 audit[5579]: USER_END pid=5579 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:42.418783 systemd[1]: sshd@22-10.0.7.100:22-139.178.89.65:57608.service: Deactivated successfully.
Dec 12 17:28:42.414000 audit[5579]: CRED_DISP pid=5579 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:42.421308 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 17:28:42.421786 kernel: audit: type=1106 audit(1765560522.414:903): pid=5579 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:42.421885 kernel: audit: type=1104 audit(1765560522.414:904): pid=5579 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:42.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.7.100:22-139.178.89.65:57608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:42.422592 systemd-logind[1668]: Session 23 logged out. Waiting for processes to exit.
Dec 12 17:28:42.423629 systemd-logind[1668]: Removed session 23.
Dec 12 17:28:46.978243 kubelet[2914]: E1212 17:28:46.978155 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c"
Dec 12 17:28:47.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.7.100:22-139.178.89.65:57624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:47.588671 systemd[1]: Started sshd@23-10.0.7.100:22-139.178.89.65:57624.service - OpenSSH per-connection server daemon (139.178.89.65:57624).
Dec 12 17:28:47.591923 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 17:28:47.591978 kernel: audit: type=1130 audit(1765560527.586:906): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.7.100:22-139.178.89.65:57624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:47.975713 containerd[1693]: time="2025-12-12T17:28:47.975592193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 17:28:48.326793 containerd[1693]: time="2025-12-12T17:28:48.326682535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:28:48.328766 containerd[1693]: time="2025-12-12T17:28:48.328699305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 17:28:48.328855 containerd[1693]: time="2025-12-12T17:28:48.328740025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Dec 12 17:28:48.329099 kubelet[2914]: E1212 17:28:48.329047 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:28:48.329423 kubelet[2914]: E1212 17:28:48.329103 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:28:48.329423 kubelet[2914]: E1212 17:28:48.329240 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49bl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bb58f788f-rt28s_calico-system(4f0bba68-28b8-4297-9bde-8061a24d85f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:28:48.330466 kubelet[2914]: E1212 17:28:48.330419 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6"
Dec 12 17:28:48.416000 audit[5597]: USER_ACCT pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.418474 sshd[5597]: Accepted publickey for core from 139.178.89.65 port 57624 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:28:48.419000 audit[5597]: CRED_ACQ pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.422095 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:48.424477 kernel: audit: type=1101 audit(1765560528.416:907): pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.424586 kernel: audit: type=1103 audit(1765560528.419:908): pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.424604 kernel: audit: type=1006 audit(1765560528.419:909): pid=5597 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Dec 12 17:28:48.426079 kernel: audit: type=1300 audit(1765560528.419:909): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd04d990 a2=3 a3=0 items=0 ppid=1 pid=5597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:48.419000 audit[5597]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd04d990 a2=3 a3=0 items=0 ppid=1 pid=5597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:48.426713 systemd-logind[1668]: New session 24 of user core.
Dec 12 17:28:48.419000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:28:48.429855 kernel: audit: type=1327 audit(1765560528.419:909): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:28:48.435074 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 12 17:28:48.435000 audit[5597]: USER_START pid=5597 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.436000 audit[5600]: CRED_ACQ pid=5600 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.443557 kernel: audit: type=1105 audit(1765560528.435:910): pid=5597 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.443632 kernel: audit: type=1103 audit(1765560528.436:911): pid=5600 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.952016 sshd[5600]: Connection closed by 139.178.89.65 port 57624
Dec 12 17:28:48.953642 sshd-session[5597]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:48.952000 audit[5597]: USER_END pid=5597 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.957968 systemd[1]: sshd@23-10.0.7.100:22-139.178.89.65:57624.service: Deactivated successfully.
Dec 12 17:28:48.961461 kernel: audit: type=1106 audit(1765560528.952:912): pid=5597 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.961494 kernel: audit: type=1104 audit(1765560528.953:913): pid=5597 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.953000 audit[5597]: CRED_DISP pid=5597 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:48.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.7.100:22-139.178.89.65:57624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:48.958093 systemd-logind[1668]: Session 24 logged out. Waiting for processes to exit.
Dec 12 17:28:48.959778 systemd[1]: session-24.scope: Deactivated successfully.
Dec 12 17:28:48.964294 systemd-logind[1668]: Removed session 24. Dec 12 17:28:48.977237 containerd[1693]: time="2025-12-12T17:28:48.977200477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:49.322860 containerd[1693]: time="2025-12-12T17:28:49.322736031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:49.324540 containerd[1693]: time="2025-12-12T17:28:49.324491080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:28:49.324600 containerd[1693]: time="2025-12-12T17:28:49.324554441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:49.324738 kubelet[2914]: E1212 17:28:49.324692 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:49.324803 kubelet[2914]: E1212 17:28:49.324739 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:49.324975 kubelet[2914]: E1212 17:28:49.324891 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsx5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pjdtp_calico-system(9c61390d-c42e-46b3-8a0d-2bc07904de15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:49.326269 kubelet[2914]: E1212 17:28:49.326222 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15" Dec 12 17:28:49.976201 containerd[1693]: time="2025-12-12T17:28:49.976162629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:28:50.324080 containerd[1693]: time="2025-12-12T17:28:50.323997915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:50.326318 containerd[1693]: 
time="2025-12-12T17:28:50.326272366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:28:50.326444 containerd[1693]: time="2025-12-12T17:28:50.326304966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:50.326523 kubelet[2914]: E1212 17:28:50.326478 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:50.326940 kubelet[2914]: E1212 17:28:50.326526 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:50.326940 kubelet[2914]: E1212 17:28:50.326639 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2ec095a6f82141b6ae474647ccc0bac7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:50.328693 containerd[1693]: time="2025-12-12T17:28:50.328594458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:28:50.665015 containerd[1693]: 
time="2025-12-12T17:28:50.664029002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:50.666476 containerd[1693]: time="2025-12-12T17:28:50.666415894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:28:50.666553 containerd[1693]: time="2025-12-12T17:28:50.666516695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:50.666753 kubelet[2914]: E1212 17:28:50.666712 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:50.666849 kubelet[2914]: E1212 17:28:50.666762 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:50.666937 kubelet[2914]: E1212 17:28:50.666893 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6568b8ffc6-2jn5v_calico-system(1b458fc3-1075-4699-aeb2-3b97525fad3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:50.668165 kubelet[2914]: E1212 17:28:50.668116 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c" Dec 12 17:28:52.975972 containerd[1693]: time="2025-12-12T17:28:52.975922189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:53.315521 containerd[1693]: time="2025-12-12T17:28:53.315466794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:53.317084 containerd[1693]: time="2025-12-12T17:28:53.317041962Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:53.317189 containerd[1693]: time="2025-12-12T17:28:53.317127202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:53.317299 kubelet[2914]: E1212 17:28:53.317262 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:53.318005 kubelet[2914]: E1212 17:28:53.317310 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:53.318005 kubelet[2914]: E1212 17:28:53.317600 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:53.318291 containerd[1693]: time="2025-12-12T17:28:53.317581885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:53.649332 containerd[1693]: time="2025-12-12T17:28:53.649180370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:53.650885 containerd[1693]: time="2025-12-12T17:28:53.650816938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:53.650986 containerd[1693]: time="2025-12-12T17:28:53.650857138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:53.651132 kubelet[2914]: E1212 17:28:53.651068 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:53.651185 kubelet[2914]: E1212 17:28:53.651142 2914 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:53.651711 containerd[1693]: time="2025-12-12T17:28:53.651520382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:53.651773 kubelet[2914]: E1212 17:28:53.651430 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gntvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-r69nw_calico-apiserver(482c1729-f8e9-4de5-b13e-dc7cf991c00d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:53.652696 kubelet[2914]: E1212 17:28:53.652660 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d" Dec 12 17:28:53.989554 containerd[1693]: time="2025-12-12T17:28:53.989304578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:53.991351 containerd[1693]: time="2025-12-12T17:28:53.991291468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:53.991437 containerd[1693]: time="2025-12-12T17:28:53.991363708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:53.991572 kubelet[2914]: E1212 17:28:53.991526 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:53.991623 kubelet[2914]: E1212 17:28:53.991575 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:53.992073 kubelet[2914]: E1212 17:28:53.991697 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nmqkr_calico-system(38591688-bf7e-4006-99b5-49217c275f18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:53.993282 kubelet[2914]: E1212 17:28:53.993239 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18" Dec 12 17:28:54.120720 systemd[1]: Started sshd@24-10.0.7.100:22-139.178.89.65:48168.service - OpenSSH per-connection server daemon (139.178.89.65:48168). Dec 12 17:28:54.122422 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:54.122461 kernel: audit: type=1130 audit(1765560534.118:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.7.100:22-139.178.89.65:48168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:54.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.7.100:22-139.178.89.65:48168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 12 17:28:54.948000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:54.950991 sshd[5640]: Accepted publickey for core from 139.178.89.65 port 48168 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA
Dec 12 17:28:54.953955 kernel: audit: type=1101 audit(1765560534.948:916): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:54.954094 kernel: audit: type=1103 audit(1765560534.952:917): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:54.952000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:54.954326 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:54.958082 kernel: audit: type=1006 audit(1765560534.952:918): pid=5640 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 12 17:28:54.958226 kernel: audit: type=1300 audit(1765560534.952:918): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe018adc0 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:54.952000 audit[5640]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe018adc0 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:28:54.958933 systemd-logind[1668]: New session 25 of user core.
Dec 12 17:28:54.952000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:28:54.961886 kernel: audit: type=1327 audit(1765560534.952:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:28:54.973310 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 12 17:28:54.973000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:54.975000 audit[5643]: CRED_ACQ pid=5643 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:54.981816 kernel: audit: type=1105 audit(1765560534.973:919): pid=5640 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:54.981952 kernel: audit: type=1103 audit(1765560534.975:920): pid=5643 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:55.483681 sshd[5643]: Connection closed by 139.178.89.65 port 48168
Dec 12 17:28:55.484019 sshd-session[5640]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:55.483000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:55.488139 systemd[1]: sshd@24-10.0.7.100:22-139.178.89.65:48168.service: Deactivated successfully.
Dec 12 17:28:55.483000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:55.490284 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 17:28:55.491430 kernel: audit: type=1106 audit(1765560535.483:921): pid=5640 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:55.491525 kernel: audit: type=1104 audit(1765560535.483:922): pid=5640 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 17:28:55.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.7.100:22-139.178.89.65:48168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:28:55.491314 systemd-logind[1668]: Session 25 logged out. Waiting for processes to exit.
Dec 12 17:28:55.492543 systemd-logind[1668]: Removed session 25.
Dec 12 17:28:58.976525 containerd[1693]: time="2025-12-12T17:28:58.976299742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:28:59.332179 containerd[1693]: time="2025-12-12T17:28:59.332132670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:28:59.334071 containerd[1693]: time="2025-12-12T17:28:59.333800758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:28:59.334179 containerd[1693]: time="2025-12-12T17:28:59.333853279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 12 17:28:59.334409 kubelet[2914]: E1212 17:28:59.334317 2914 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:28:59.335127 kubelet[2914]: E1212 17:28:59.334445 2914 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:28:59.335127 kubelet[2914]: E1212 17:28:59.334940 2914 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htxpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7dfdfddc-679rm_calico-apiserver(96a8850e-ec22-48ad-ae30-8b21eeca5a2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:28:59.336336 kubelet[2914]: E1212 17:28:59.336155 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c"
Dec 12 17:29:00.976848 kubelet[2914]: E1212 17:29:00.976474 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c"
Dec 12 17:29:00.977935 kubelet[2914]: E1212 17:29:00.977661 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bb58f788f-rt28s" podUID="4f0bba68-28b8-4297-9bde-8061a24d85f6"
Dec 12 17:29:02.976463 kubelet[2914]: E1212 17:29:02.976076 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pjdtp" podUID="9c61390d-c42e-46b3-8a0d-2bc07904de15"
Dec 12 17:29:03.976307 kubelet[2914]: E1212 17:29:03.976247 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-r69nw" podUID="482c1729-f8e9-4de5-b13e-dc7cf991c00d"
Dec 12 17:29:05.976520 kubelet[2914]: E1212 17:29:05.976267 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nmqkr" podUID="38591688-bf7e-4006-99b5-49217c275f18"
Dec 12 17:29:10.086967 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec
Dec 12 17:29:11.979462 kubelet[2914]: E1212 17:29:11.978712 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6568b8ffc6-2jn5v" podUID="1b458fc3-1075-4699-aeb2-3b97525fad3c"
Dec 12 17:29:13.975506 kubelet[2914]: E1212 17:29:13.975433 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7dfdfddc-679rm" podUID="96a8850e-ec22-48ad-ae30-8b21eeca5a2c"