Dec 16 12:24:39.428548 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:24:39.428573 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025
Dec 16 12:24:39.428584 kernel: KASLR enabled
Dec 16 12:24:39.428590 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:24:39.428596 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438351218
Dec 16 12:24:39.428602 kernel: random: crng init done
Dec 16 12:24:39.428609 kernel: secureboot: Secure boot disabled
Dec 16 12:24:39.428615 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:24:39.428622 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Dec 16 12:24:39.428630 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:24:39.428636 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428642 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428649 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428655 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428664 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428671 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428677 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428684 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428691 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428697 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:24:39.428704 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 12:24:39.428710 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 16 12:24:39.428717 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:24:39.428724 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Dec 16 12:24:39.428731 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Dec 16 12:24:39.428738 kernel: Zone ranges:
Dec 16 12:24:39.428744 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 16 12:24:39.428751 kernel: DMA32 empty
Dec 16 12:24:39.428757 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Dec 16 12:24:39.428764 kernel: Device empty
Dec 16 12:24:39.428770 kernel: Movable zone start for each node
Dec 16 12:24:39.428777 kernel: Early memory node ranges
Dec 16 12:24:39.428783 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff]
Dec 16 12:24:39.428790 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff]
Dec 16 12:24:39.428797 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff]
Dec 16 12:24:39.428804 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff]
Dec 16 12:24:39.428811 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff]
Dec 16 12:24:39.428817 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Dec 16 12:24:39.428824 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Dec 16 12:24:39.428831 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:24:39.428840 kernel: psci: PSCIv1.3 detected in firmware.
Dec 16 12:24:39.428848 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:24:39.428855 kernel: psci: Trusted OS migration not required
Dec 16 12:24:39.428862 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:24:39.428869 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:24:39.428876 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 16 12:24:39.428883 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 16 12:24:39.428890 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Dec 16 12:24:39.428897 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Dec 16 12:24:39.428905 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:24:39.428912 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:24:39.428920 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:24:39.428927 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:24:39.428934 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:24:39.428941 kernel: CPU features: detected: Spectre-v4
Dec 16 12:24:39.428948 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:24:39.428955 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:24:39.428962 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:24:39.428969 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:24:39.428976 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:24:39.428984 kernel: alternatives: applying boot alternatives
Dec 16 12:24:39.428992 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 16 12:24:39.429000 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 16 12:24:39.429007 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 12:24:39.429014 kernel: Fallback order for Node 0: 0
Dec 16 12:24:39.429021 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Dec 16 12:24:39.429028 kernel: Policy zone: Normal
Dec 16 12:24:39.429034 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:24:39.429041 kernel: software IO TLB: area num 4.
Dec 16 12:24:39.429048 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Dec 16 12:24:39.429057 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:24:39.429064 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:24:39.429072 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:24:39.429079 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:24:39.429086 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:24:39.429093 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:24:39.429100 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:24:39.429107 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:24:39.429114 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:24:39.429121 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:24:39.429128 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:24:39.429137 kernel: GICv3: 256 SPIs implemented
Dec 16 12:24:39.429144 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:24:39.429151 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:24:39.429157 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:24:39.429164 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:24:39.429171 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:24:39.429178 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:24:39.429185 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:24:39.429193 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:24:39.429200 kernel: GICv3: using LPI property table @0x0000000100130000
Dec 16 12:24:39.429207 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Dec 16 12:24:39.429214 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:24:39.429222 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:24:39.429229 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:24:39.429236 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:24:39.429243 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:24:39.429250 kernel: arm-pv: using stolen time PV
Dec 16 12:24:39.429258 kernel: Console: colour dummy device 80x25
Dec 16 12:24:39.429265 kernel: ACPI: Core revision 20240827
Dec 16 12:24:39.429273 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:24:39.429282 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:24:39.429302 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:24:39.429310 kernel: landlock: Up and running.
Dec 16 12:24:39.429318 kernel: SELinux: Initializing.
Dec 16 12:24:39.429325 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:24:39.429333 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:24:39.429340 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:24:39.429348 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:24:39.429358 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:24:39.429366 kernel: Remapping and enabling EFI services.
Dec 16 12:24:39.429373 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:24:39.429381 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:24:39.429388 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:24:39.429396 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Dec 16 12:24:39.429404 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:24:39.429412 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:24:39.429420 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:24:39.429432 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:24:39.429441 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Dec 16 12:24:39.429449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:24:39.429456 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:24:39.429464 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:24:39.429472 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:24:39.429481 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Dec 16 12:24:39.429489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:24:39.429497 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:24:39.429504 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:24:39.429512 kernel: SMP: Total of 4 processors activated.
Dec 16 12:24:39.429520 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:24:39.429529 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:24:39.429537 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:24:39.429544 kernel: CPU features: detected: Common not Private translations
Dec 16 12:24:39.429552 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:24:39.429560 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:24:39.429567 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:24:39.429575 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:24:39.429584 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:24:39.429592 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:24:39.429600 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:24:39.429608 kernel: alternatives: applying system-wide alternatives
Dec 16 12:24:39.429615 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:24:39.429624 kernel: Memory: 16324496K/16777216K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 429936K reserved, 16384K cma-reserved)
Dec 16 12:24:39.429632 kernel: devtmpfs: initialized
Dec 16 12:24:39.429641 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:24:39.429650 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:24:39.429657 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:24:39.429665 kernel: 0 pages in range for non-PLT usage
Dec 16 12:24:39.429673 kernel: 515184 pages in range for PLT usage
Dec 16 12:24:39.429680 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:24:39.429688 kernel: SMBIOS 3.0.0 present.
Dec 16 12:24:39.429696 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 16 12:24:39.429705 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:24:39.429712 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:24:39.429720 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:24:39.429729 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:24:39.429740 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:24:39.429748 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:24:39.429756 kernel: audit: type=2000 audit(0.039:1): state=initialized audit_enabled=0 res=1
Dec 16 12:24:39.429767 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:24:39.429775 kernel: cpuidle: using governor menu
Dec 16 12:24:39.429782 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:24:39.429790 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:24:39.429798 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:24:39.429806 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:24:39.429814 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:24:39.429824 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:24:39.429832 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:24:39.429840 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:24:39.429848 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:24:39.429856 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:24:39.429863 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:24:39.429871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:24:39.429880 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:24:39.429888 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:24:39.429896 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:24:39.429904 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:24:39.429912 kernel: ACPI: Interpreter enabled
Dec 16 12:24:39.429920 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:24:39.429927 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:24:39.429935 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:24:39.429945 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:24:39.429952 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:24:39.429960 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:24:39.429968 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:24:39.429976 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:24:39.429984 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:24:39.430179 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:24:39.430275 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:24:39.430400 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:24:39.430487 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:24:39.430569 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:24:39.430580 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:24:39.430588 kernel: PCI host bridge to bus 0000:00
Dec 16 12:24:39.430682 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:24:39.430779 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:24:39.430856 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:24:39.430933 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:24:39.431037 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:24:39.431136 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.431225 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff]
Dec 16 12:24:39.431334 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 16 12:24:39.431425 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff]
Dec 16 12:24:39.431510 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Dec 16 12:24:39.431623 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.431779 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff]
Dec 16 12:24:39.431871 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Dec 16 12:24:39.431954 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff]
Dec 16 12:24:39.432053 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.432137 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff]
Dec 16 12:24:39.432223 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Dec 16 12:24:39.432336 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff]
Dec 16 12:24:39.432435 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Dec 16 12:24:39.432528 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.432610 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff]
Dec 16 12:24:39.432693 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Dec 16 12:24:39.432779 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Dec 16 12:24:39.432870 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.432953 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff]
Dec 16 12:24:39.433037 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Dec 16 12:24:39.433118 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff]
Dec 16 12:24:39.433200 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Dec 16 12:24:39.433307 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.433408 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff]
Dec 16 12:24:39.433493 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Dec 16 12:24:39.433575 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff]
Dec 16 12:24:39.433662 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Dec 16 12:24:39.433754 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.433848 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff]
Dec 16 12:24:39.433940 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Dec 16 12:24:39.434033 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.434117 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff]
Dec 16 12:24:39.434199 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Dec 16 12:24:39.434298 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.434411 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff]
Dec 16 12:24:39.434498 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Dec 16 12:24:39.434591 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.434676 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff]
Dec 16 12:24:39.434786 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Dec 16 12:24:39.434890 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.434975 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff]
Dec 16 12:24:39.435058 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Dec 16 12:24:39.435147 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.435230 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff]
Dec 16 12:24:39.435340 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Dec 16 12:24:39.435439 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.435524 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff]
Dec 16 12:24:39.435612 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Dec 16 12:24:39.435710 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.435801 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff]
Dec 16 12:24:39.435891 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Dec 16 12:24:39.435983 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.436068 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff]
Dec 16 12:24:39.436156 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Dec 16 12:24:39.436248 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.436351 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff]
Dec 16 12:24:39.436437 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Dec 16 12:24:39.436530 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.436614 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff]
Dec 16 12:24:39.436698 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Dec 16 12:24:39.436791 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.436879 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff]
Dec 16 12:24:39.436962 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Dec 16 12:24:39.437044 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff]
Dec 16 12:24:39.437127 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff]
Dec 16 12:24:39.437216 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.439627 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff]
Dec 16 12:24:39.439767 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Dec 16 12:24:39.439853 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff]
Dec 16 12:24:39.439939 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff]
Dec 16 12:24:39.440045 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.440131 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff]
Dec 16 12:24:39.440219 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Dec 16 12:24:39.440339 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]
Dec 16 12:24:39.440440 kernel: pci 0000:00:03.3: bridge window [mem 0x11a00000-0x11bfffff]
Dec 16 12:24:39.440549 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.440638 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff]
Dec 16 12:24:39.440724 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Dec 16 12:24:39.440808 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff]
Dec 16 12:24:39.440899 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff]
Dec 16 12:24:39.440994 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.441097 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff]
Dec 16 12:24:39.441182 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Dec 16 12:24:39.441269 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff]
Dec 16 12:24:39.441392 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff]
Dec 16 12:24:39.441495 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.441583 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff]
Dec 16 12:24:39.441677 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Dec 16 12:24:39.441763 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff]
Dec 16 12:24:39.441849 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff]
Dec 16 12:24:39.441939 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.442028 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff]
Dec 16 12:24:39.442113 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Dec 16 12:24:39.442195 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff]
Dec 16 12:24:39.442315 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff]
Dec 16 12:24:39.442412 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.442498 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff]
Dec 16 12:24:39.442585 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Dec 16 12:24:39.442675 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff]
Dec 16 12:24:39.442772 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff]
Dec 16 12:24:39.442866 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.442953 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff]
Dec 16 12:24:39.443039 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Dec 16 12:24:39.443121 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff]
Dec 16 12:24:39.443206 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff]
Dec 16 12:24:39.443335 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.443429 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff]
Dec 16 12:24:39.443513 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Dec 16 12:24:39.443599 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff]
Dec 16 12:24:39.443681 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff]
Dec 16 12:24:39.443772 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.443856 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff]
Dec 16 12:24:39.443939 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Dec 16 12:24:39.444020 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff]
Dec 16 12:24:39.444104 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff]
Dec 16 12:24:39.444202 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.444286 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff]
Dec 16 12:24:39.444390 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Dec 16 12:24:39.444478 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff]
Dec 16 12:24:39.444561 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff]
Dec 16 12:24:39.444653 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.444737 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff]
Dec 16 12:24:39.444820 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Dec 16 12:24:39.444904 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff]
Dec 16 12:24:39.444992 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff]
Dec 16 12:24:39.445084 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.445170 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff]
Dec 16 12:24:39.445274 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Dec 16 12:24:39.446352 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff]
Dec 16 12:24:39.446497 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff]
Dec 16 12:24:39.446618 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.446712 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff]
Dec 16 12:24:39.446820 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Dec 16 12:24:39.446906 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff]
Dec 16 12:24:39.446991 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff]
Dec 16 12:24:39.447083 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:24:39.447173 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff]
Dec 16 12:24:39.447268 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Dec 16 12:24:39.447377 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff]
Dec 16 12:24:39.447464 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff]
Dec 16 12:24:39.447561 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 12:24:39.447660 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff]
Dec 16 12:24:39.447749 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:24:39.447834 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Dec 16 12:24:39.447934 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 16 12:24:39.448027 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit]
Dec 16 12:24:39.448131 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Dec 16 12:24:39.448220 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff]
Dec 16 12:24:39.448323 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 16 12:24:39.448421 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 12:24:39.448522 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 16 12:24:39.448623 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 12:24:39.448726 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff]
Dec 16 12:24:39.448816 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 16 12:24:39.448916 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint
Dec 16 12:24:39.449017 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff]
Dec 16 12:24:39.449105 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 16 12:24:39.449192 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 16 12:24:39.449281 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 16 12:24:39.450517 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 16 12:24:39.450618 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 16 12:24:39.450731 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 16 12:24:39.450833 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 16 12:24:39.450921 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 16 12:24:39.451006 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 16 12:24:39.451088 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 16 12:24:39.451176 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 16 12:24:39.451263 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 16 12:24:39.451400 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 16 12:24:39.451496 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 16 12:24:39.451582 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 16 12:24:39.451666 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Dec 16 12:24:39.451756 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 16 12:24:39.451845 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 16 12:24:39.451928 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 16 12:24:39.452016 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 12:24:39.452099 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000
Dec 16 12:24:39.452182 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000
Dec 16 12:24:39.452271 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 12:24:39.452378 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 16 12:24:39.452464 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 16 12:24:39.452552 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 12:24:39.452636 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 16 12:24:39.452749 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 16 12:24:39.452849 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 16 12:24:39.452935 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000
Dec 16 12:24:39.453026 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000
Dec 16 12:24:39.453115 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000
Dec 16 12:24:39.453201 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Dec 16 12:24:39.453285 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000
Dec 16 12:24:39.453397 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000
Dec 16 12:24:39.453483 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000
Dec 16 12:24:39.453573 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000
Dec 16 12:24:39.453669 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000
Dec 16 12:24:39.453771 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000
Dec 16 12:24:39.453858 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000
Dec 16
12:24:39.453954 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 16 12:24:39.454039 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Dec 16 12:24:39.454122 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Dec 16 12:24:39.454211 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 16 12:24:39.454307 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Dec 16 12:24:39.454400 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Dec 16 12:24:39.454491 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 16 12:24:39.454576 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Dec 16 12:24:39.454661 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Dec 16 12:24:39.454770 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 16 12:24:39.454873 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Dec 16 12:24:39.454963 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Dec 16 12:24:39.455056 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 16 12:24:39.455145 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Dec 16 12:24:39.455247 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 Dec 16 12:24:39.455356 kernel: pci 0000:00:03.2: 
bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Dec 16 12:24:39.455454 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Dec 16 12:24:39.455539 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Dec 16 12:24:39.455626 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Dec 16 12:24:39.455710 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Dec 16 12:24:39.455793 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Dec 16 12:24:39.455881 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Dec 16 12:24:39.455968 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Dec 16 12:24:39.456053 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Dec 16 12:24:39.456141 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 16 12:24:39.456235 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Dec 16 12:24:39.456340 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Dec 16 12:24:39.456441 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 16 12:24:39.456529 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Dec 16 12:24:39.456612 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Dec 16 12:24:39.456702 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] 
add_size 1000 Dec 16 12:24:39.456787 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Dec 16 12:24:39.456890 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Dec 16 12:24:39.456984 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 16 12:24:39.457070 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Dec 16 12:24:39.457159 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Dec 16 12:24:39.457249 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 16 12:24:39.457353 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Dec 16 12:24:39.457441 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Dec 16 12:24:39.457531 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Dec 16 12:24:39.457615 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Dec 16 12:24:39.457698 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Dec 16 12:24:39.457787 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Dec 16 12:24:39.457872 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Dec 16 12:24:39.457958 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Dec 16 12:24:39.458045 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 16 12:24:39.458130 kernel: 
pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Dec 16 12:24:39.458214 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Dec 16 12:24:39.458315 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 16 12:24:39.458407 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Dec 16 12:24:39.458493 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Dec 16 12:24:39.458582 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 16 12:24:39.458668 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Dec 16 12:24:39.458777 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Dec 16 12:24:39.458874 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 16 12:24:39.458960 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Dec 16 12:24:39.459046 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Dec 16 12:24:39.459134 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 16 12:24:39.459218 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Dec 16 12:24:39.459329 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Dec 16 12:24:39.459426 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Dec 16 12:24:39.459518 kernel: pci 0000:00:01.0: bridge window [mem 
0x8000000000-0x80001fffff 64bit pref]: assigned Dec 16 12:24:39.459604 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Dec 16 12:24:39.459687 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Dec 16 12:24:39.459774 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Dec 16 12:24:39.459857 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Dec 16 12:24:39.459943 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Dec 16 12:24:39.460026 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Dec 16 12:24:39.460118 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Dec 16 12:24:39.460200 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Dec 16 12:24:39.460286 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Dec 16 12:24:39.460390 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Dec 16 12:24:39.460479 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Dec 16 12:24:39.460563 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Dec 16 12:24:39.460654 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Dec 16 12:24:39.460738 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Dec 16 12:24:39.460823 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Dec 16 12:24:39.460909 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Dec 16 12:24:39.460995 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Dec 16 12:24:39.461077 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: 
assigned Dec 16 12:24:39.461168 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Dec 16 12:24:39.461252 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Dec 16 12:24:39.461353 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Dec 16 12:24:39.461438 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Dec 16 12:24:39.461525 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Dec 16 12:24:39.461608 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Dec 16 12:24:39.461694 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Dec 16 12:24:39.461780 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Dec 16 12:24:39.461865 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Dec 16 12:24:39.461947 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Dec 16 12:24:39.462032 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Dec 16 12:24:39.462114 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Dec 16 12:24:39.462199 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Dec 16 12:24:39.462285 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Dec 16 12:24:39.462386 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Dec 16 12:24:39.462470 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Dec 16 12:24:39.462555 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Dec 16 12:24:39.462639 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Dec 16 12:24:39.462737 kernel: pci 
0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Dec 16 12:24:39.462833 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Dec 16 12:24:39.462920 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Dec 16 12:24:39.463006 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Dec 16 12:24:39.463093 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Dec 16 12:24:39.463176 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Dec 16 12:24:39.463260 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Dec 16 12:24:39.463361 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Dec 16 12:24:39.463456 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Dec 16 12:24:39.463539 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Dec 16 12:24:39.463625 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Dec 16 12:24:39.463707 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Dec 16 12:24:39.463793 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Dec 16 12:24:39.463875 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Dec 16 12:24:39.463965 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Dec 16 12:24:39.464049 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Dec 16 12:24:39.464135 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Dec 16 12:24:39.464218 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Dec 16 12:24:39.464318 kernel: pci 0000:00:04.4: bridge window [mem 
0x13800000-0x139fffff]: assigned Dec 16 12:24:39.464405 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Dec 16 12:24:39.464496 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Dec 16 12:24:39.464584 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Dec 16 12:24:39.464671 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Dec 16 12:24:39.464756 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Dec 16 12:24:39.464844 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Dec 16 12:24:39.464928 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Dec 16 12:24:39.465015 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Dec 16 12:24:39.465101 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Dec 16 12:24:39.465188 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Dec 16 12:24:39.465272 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Dec 16 12:24:39.465385 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Dec 16 12:24:39.465471 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Dec 16 12:24:39.465556 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Dec 16 12:24:39.465644 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Dec 16 12:24:39.465731 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Dec 16 12:24:39.465815 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Dec 16 12:24:39.465901 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Dec 16 12:24:39.465984 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 12:24:39.466076 kernel: pci 0000:00:01.5: BAR 0 
[mem 0x14205000-0x14205fff]: assigned Dec 16 12:24:39.466164 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Dec 16 12:24:39.466252 kernel: pci 0000:00:01.6: BAR 0 [mem 0x14206000-0x14206fff]: assigned Dec 16 12:24:39.466368 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Dec 16 12:24:39.466460 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Dec 16 12:24:39.466546 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Dec 16 12:24:39.466632 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Dec 16 12:24:39.466716 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Dec 16 12:24:39.466825 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Dec 16 12:24:39.466910 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Dec 16 12:24:39.466995 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Dec 16 12:24:39.467079 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Dec 16 12:24:39.467163 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Dec 16 12:24:39.467246 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Dec 16 12:24:39.467360 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Dec 16 12:24:39.467456 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Dec 16 12:24:39.467546 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Dec 16 12:24:39.467637 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Dec 16 12:24:39.467724 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Dec 16 12:24:39.467812 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Dec 16 12:24:39.467898 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Dec 16 12:24:39.467981 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 
12:24:39.468079 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.468164 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Dec 16 12:24:39.468247 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.468346 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.468444 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Dec 16 12:24:39.468529 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.468613 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.468697 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Dec 16 12:24:39.468780 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.468866 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.468953 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Dec 16 12:24:39.469037 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.469119 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.469208 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Dec 16 12:24:39.469307 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.469408 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.469505 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Dec 16 12:24:39.469590 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.469675 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.469760 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Dec 16 12:24:39.469843 kernel: pci 
0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.469926 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.470013 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Dec 16 12:24:39.470096 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.470178 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.470263 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Dec 16 12:24:39.470361 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.470448 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.470539 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Dec 16 12:24:39.470631 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.470717 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.470820 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Dec 16 12:24:39.470908 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.470993 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.471081 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Dec 16 12:24:39.471172 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.471259 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.471368 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Dec 16 12:24:39.471458 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.471543 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.471631 kernel: pci 0000:00:04.5: BAR 0 [mem 
0x1421d000-0x1421dfff]: assigned Dec 16 12:24:39.471720 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.471805 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.471893 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Dec 16 12:24:39.471978 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.472065 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.472155 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Dec 16 12:24:39.472240 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.472355 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.472449 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Dec 16 12:24:39.472533 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.472616 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.472699 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Dec 16 12:24:39.472783 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Dec 16 12:24:39.472870 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Dec 16 12:24:39.472953 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Dec 16 12:24:39.473036 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 12:24:39.473124 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Dec 16 12:24:39.473216 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Dec 16 12:24:39.473312 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Dec 16 12:24:39.473404 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Dec 16 12:24:39.473494 kernel: pci 0000:00:03.7: 
bridge window [io 0xa000-0xafff]: assigned Dec 16 12:24:39.473592 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Dec 16 12:24:39.473683 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Dec 16 12:24:39.473771 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Dec 16 12:24:39.473861 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Dec 16 12:24:39.473947 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Dec 16 12:24:39.474034 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.474123 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.474209 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.474302 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.474392 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.474477 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.474566 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.474651 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.474759 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.474850 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.474936 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.475022 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.475108 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.475192 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.475281 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: 
can't assign; no space Dec 16 12:24:39.475390 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.475481 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.475565 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.475653 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.475739 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.475825 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.475914 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.476000 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.476085 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.476173 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.476259 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.476362 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.476458 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.476558 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.476646 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.476734 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.476816 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.476904 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.476987 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.477072 kernel: pci 0000:00:01.0: bridge 
window [io size 0x1000]: can't assign; no space Dec 16 12:24:39.477154 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Dec 16 12:24:39.477248 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Dec 16 12:24:39.477348 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 16 12:24:39.477438 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Dec 16 12:24:39.477529 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 16 12:24:39.477619 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Dec 16 12:24:39.477708 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Dec 16 12:24:39.477803 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Dec 16 12:24:39.477893 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Dec 16 12:24:39.477980 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Dec 16 12:24:39.478065 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Dec 16 12:24:39.478169 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Dec 16 12:24:39.478279 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Dec 16 12:24:39.478381 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Dec 16 12:24:39.478467 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Dec 16 12:24:39.478553 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Dec 16 12:24:39.478643 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Dec 16 12:24:39.478745 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Dec 16 12:24:39.478837 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Dec 16 12:24:39.478921 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Dec 16 12:24:39.479015 kernel: pci 0000:05:00.0: BAR 4 [mem 
0x8000800000-0x8000803fff 64bit pref]: assigned Dec 16 12:24:39.479100 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Dec 16 12:24:39.479184 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Dec 16 12:24:39.479267 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Dec 16 12:24:39.479366 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Dec 16 12:24:39.479460 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Dec 16 12:24:39.479546 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Dec 16 12:24:39.479633 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Dec 16 12:24:39.479719 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Dec 16 12:24:39.479804 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 16 12:24:39.479890 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Dec 16 12:24:39.479974 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Dec 16 12:24:39.480057 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 16 12:24:39.480144 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Dec 16 12:24:39.480227 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Dec 16 12:24:39.480329 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 16 12:24:39.480419 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Dec 16 12:24:39.480505 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Dec 16 12:24:39.480592 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Dec 16 12:24:39.480678 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Dec 16 12:24:39.480761 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Dec 16 12:24:39.480844 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Dec 16 12:24:39.480929 kernel: pci 
0000:00:02.2: PCI bridge to [bus 0b] Dec 16 12:24:39.481012 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Dec 16 12:24:39.481098 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Dec 16 12:24:39.481185 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Dec 16 12:24:39.481271 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Dec 16 12:24:39.481368 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Dec 16 12:24:39.481457 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Dec 16 12:24:39.481547 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Dec 16 12:24:39.481635 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Dec 16 12:24:39.481722 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Dec 16 12:24:39.481806 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Dec 16 12:24:39.481889 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Dec 16 12:24:39.481979 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Dec 16 12:24:39.482065 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Dec 16 12:24:39.482150 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Dec 16 12:24:39.482236 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Dec 16 12:24:39.482334 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Dec 16 12:24:39.482421 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Dec 16 12:24:39.482511 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Dec 16 12:24:39.482595 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Dec 16 12:24:39.482679 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Dec 16 12:24:39.482779 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Dec 16 12:24:39.482866 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Dec 16 12:24:39.482952 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Dec 16 12:24:39.483041 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Dec 16 12:24:39.483124 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Dec 16 12:24:39.483207 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Dec 16 12:24:39.483304 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Dec 16 12:24:39.483399 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Dec 16 12:24:39.483485 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Dec 16 12:24:39.483575 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Dec 16 12:24:39.483658 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Dec 16 12:24:39.483744 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Dec 16 12:24:39.483828 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Dec 16 12:24:39.483913 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Dec 16 12:24:39.483996 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Dec 16 12:24:39.484084 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Dec 16 12:24:39.484169 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Dec 16 12:24:39.484254 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Dec 16 12:24:39.484352 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Dec 16 12:24:39.484441 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Dec 16 12:24:39.484525 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Dec 16 12:24:39.484608 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Dec 16 12:24:39.484692 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Dec 16 12:24:39.484783 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Dec 16 12:24:39.484867 kernel: pci 
0000:00:03.7: bridge window [io 0xa000-0xafff] Dec 16 12:24:39.484949 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Dec 16 12:24:39.485032 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Dec 16 12:24:39.485118 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Dec 16 12:24:39.485201 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Dec 16 12:24:39.485300 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Dec 16 12:24:39.485390 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Dec 16 12:24:39.485478 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Dec 16 12:24:39.485565 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Dec 16 12:24:39.485648 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Dec 16 12:24:39.485731 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Dec 16 12:24:39.485817 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Dec 16 12:24:39.485906 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Dec 16 12:24:39.485990 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Dec 16 12:24:39.486075 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Dec 16 12:24:39.486164 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Dec 16 12:24:39.486247 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Dec 16 12:24:39.486340 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Dec 16 12:24:39.486423 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Dec 16 12:24:39.486514 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Dec 16 12:24:39.486599 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Dec 16 12:24:39.486684 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Dec 16 12:24:39.486779 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] 
Dec 16 12:24:39.486867 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Dec 16 12:24:39.486952 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Dec 16 12:24:39.487037 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Dec 16 12:24:39.487120 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Dec 16 12:24:39.487207 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Dec 16 12:24:39.487302 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Dec 16 12:24:39.487391 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Dec 16 12:24:39.487475 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Dec 16 12:24:39.487562 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Dec 16 12:24:39.487649 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Dec 16 12:24:39.487731 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Dec 16 12:24:39.487813 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Dec 16 12:24:39.487900 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Dec 16 12:24:39.487984 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Dec 16 12:24:39.488066 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Dec 16 12:24:39.488148 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Dec 16 12:24:39.488235 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 16 12:24:39.488328 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 12:24:39.488406 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 16 12:24:39.488496 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Dec 16 12:24:39.488575 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Dec 16 12:24:39.488667 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Dec 16 12:24:39.488745 kernel: pci_bus 
0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Dec 16 12:24:39.488831 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Dec 16 12:24:39.488911 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Dec 16 12:24:39.488996 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Dec 16 12:24:39.489079 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Dec 16 12:24:39.489167 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Dec 16 12:24:39.489245 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Dec 16 12:24:39.489349 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Dec 16 12:24:39.489431 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 16 12:24:39.489516 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Dec 16 12:24:39.489597 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 16 12:24:39.489683 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Dec 16 12:24:39.489760 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 16 12:24:39.489844 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Dec 16 12:24:39.489921 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Dec 16 12:24:39.490008 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Dec 16 12:24:39.490085 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Dec 16 12:24:39.490170 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Dec 16 12:24:39.490247 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Dec 16 12:24:39.490355 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Dec 16 12:24:39.490441 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Dec 16 12:24:39.490539 kernel: pci_bus 
0000:0d: resource 1 [mem 0x11800000-0x119fffff] Dec 16 12:24:39.490620 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Dec 16 12:24:39.490705 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Dec 16 12:24:39.490801 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Dec 16 12:24:39.490901 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Dec 16 12:24:39.490980 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Dec 16 12:24:39.491064 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Dec 16 12:24:39.491143 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Dec 16 12:24:39.491228 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Dec 16 12:24:39.491330 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Dec 16 12:24:39.491422 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Dec 16 12:24:39.491501 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Dec 16 12:24:39.491587 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Dec 16 12:24:39.491667 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Dec 16 12:24:39.491749 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Dec 16 12:24:39.491838 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Dec 16 12:24:39.491916 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Dec 16 12:24:39.491993 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Dec 16 12:24:39.492077 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Dec 16 12:24:39.492155 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Dec 16 12:24:39.492235 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Dec 16 12:24:39.492341 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Dec 16 12:24:39.492426 
kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Dec 16 12:24:39.492510 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Dec 16 12:24:39.492597 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Dec 16 12:24:39.492681 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Dec 16 12:24:39.492758 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Dec 16 12:24:39.492851 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Dec 16 12:24:39.492928 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Dec 16 12:24:39.493005 kernel: pci_bus 0000:18: resource 2 [mem 0x8002e00000-0x8002ffffff 64bit pref] Dec 16 12:24:39.493089 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Dec 16 12:24:39.493168 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Dec 16 12:24:39.493245 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Dec 16 12:24:39.493367 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Dec 16 12:24:39.493451 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Dec 16 12:24:39.493528 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Dec 16 12:24:39.493614 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 16 12:24:39.493696 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Dec 16 12:24:39.493773 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Dec 16 12:24:39.493867 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Dec 16 12:24:39.493947 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Dec 16 12:24:39.494025 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Dec 16 12:24:39.494114 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Dec 16 12:24:39.494192 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Dec 16 12:24:39.494271 kernel: pci_bus 0000:1d: resource 2 [mem 
0x8003800000-0x80039fffff 64bit pref] Dec 16 12:24:39.494377 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Dec 16 12:24:39.494458 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Dec 16 12:24:39.494537 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Dec 16 12:24:39.494625 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Dec 16 12:24:39.494703 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Dec 16 12:24:39.494799 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Dec 16 12:24:39.494898 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Dec 16 12:24:39.494981 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Dec 16 12:24:39.495067 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Dec 16 12:24:39.495171 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Dec 16 12:24:39.495251 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Dec 16 12:24:39.495351 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Dec 16 12:24:39.495364 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 12:24:39.495373 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 12:24:39.495382 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 12:24:39.495394 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 12:24:39.495403 kernel: iommu: Default domain type: Translated Dec 16 12:24:39.495411 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:24:39.495419 kernel: efivars: Registered efivars operations Dec 16 12:24:39.495428 kernel: vgaarb: loaded Dec 16 12:24:39.495436 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:24:39.495445 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:24:39.495454 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:24:39.495463 kernel: pnp: PnP ACPI 
init Dec 16 12:24:39.495565 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 16 12:24:39.495578 kernel: pnp: PnP ACPI: found 1 devices Dec 16 12:24:39.495586 kernel: NET: Registered PF_INET protocol family Dec 16 12:24:39.495594 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:24:39.495605 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Dec 16 12:24:39.495613 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:24:39.495622 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 12:24:39.495630 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 12:24:39.495638 kernel: TCP: Hash tables configured (established 131072 bind 65536) Dec 16 12:24:39.495647 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 16 12:24:39.495655 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 16 12:24:39.495665 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:24:39.495760 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Dec 16 12:24:39.495773 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:24:39.495781 kernel: kvm [1]: HYP mode not available Dec 16 12:24:39.495790 kernel: Initialise system trusted keyrings Dec 16 12:24:39.495798 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Dec 16 12:24:39.495806 kernel: Key type asymmetric registered Dec 16 12:24:39.495816 kernel: Asymmetric key parser 'x509' registered Dec 16 12:24:39.495825 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:24:39.495833 kernel: io scheduler mq-deadline registered Dec 16 12:24:39.495841 kernel: io scheduler kyber registered Dec 16 12:24:39.495850 kernel: io scheduler bfq registered Dec 16 12:24:39.495858 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 
12:24:39.495946 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Dec 16 12:24:39.496030 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Dec 16 12:24:39.496115 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.496201 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Dec 16 12:24:39.496306 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Dec 16 12:24:39.496394 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.496486 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Dec 16 12:24:39.496570 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Dec 16 12:24:39.496658 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.496754 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Dec 16 12:24:39.496842 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Dec 16 12:24:39.496925 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.497010 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Dec 16 12:24:39.497093 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Dec 16 12:24:39.497177 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.497264 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Dec 16 12:24:39.497418 kernel: pcieport 0000:00:01.5: AER: enabled with IRQ 55 Dec 16 12:24:39.497508 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.497596 
kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Dec 16 12:24:39.497681 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Dec 16 12:24:39.497764 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.497856 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Dec 16 12:24:39.497940 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Dec 16 12:24:39.498023 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.498035 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 16 12:24:39.498118 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Dec 16 12:24:39.498202 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Dec 16 12:24:39.498305 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.498398 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Dec 16 12:24:39.498484 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Dec 16 12:24:39.498571 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.498659 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Dec 16 12:24:39.498758 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Dec 16 12:24:39.498848 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.498938 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Dec 16 12:24:39.499027 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Dec 16 12:24:39.499111 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ 
NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.499198 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Dec 16 12:24:39.499310 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Dec 16 12:24:39.499423 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.499527 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Dec 16 12:24:39.499614 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Dec 16 12:24:39.499697 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.499783 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Dec 16 12:24:39.499868 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Dec 16 12:24:39.499958 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.500050 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Dec 16 12:24:39.500135 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Dec 16 12:24:39.500220 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.500232 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 16 12:24:39.500350 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Dec 16 12:24:39.500440 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Dec 16 12:24:39.500527 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.500615 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Dec 16 12:24:39.500697 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Dec 16 12:24:39.500782 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.500870 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Dec 16 12:24:39.500953 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Dec 16 12:24:39.501035 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.501125 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Dec 16 12:24:39.501208 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Dec 16 12:24:39.501313 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.501425 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Dec 16 12:24:39.501512 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Dec 16 12:24:39.501599 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.501693 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Dec 16 12:24:39.501777 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Dec 16 12:24:39.501863 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.501957 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Dec 16 12:24:39.502042 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Dec 16 12:24:39.502124 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.502215 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Dec 16 12:24:39.502312 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Dec 16 12:24:39.502400 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.502411 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 16 12:24:39.502496 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Dec 16 12:24:39.502580 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Dec 16 12:24:39.502663 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.502770 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Dec 16 12:24:39.502857 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Dec 16 12:24:39.502940 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.503027 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Dec 16 12:24:39.503112 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Dec 16 12:24:39.503196 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.503300 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Dec 16 12:24:39.503409 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Dec 16 12:24:39.503495 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.503583 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Dec 16 12:24:39.503667 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Dec 16 12:24:39.503752 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.503843 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Dec 16 12:24:39.503927 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Dec 16 12:24:39.504010 kernel: pcieport 
0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.504096 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Dec 16 12:24:39.504179 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Dec 16 12:24:39.504262 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.504514 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Dec 16 12:24:39.504605 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Dec 16 12:24:39.504690 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.504780 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Dec 16 12:24:39.504864 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Dec 16 12:24:39.504950 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:24:39.504961 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 12:24:39.504974 kernel: ACPI: button: Power Button [PWRB] Dec 16 12:24:39.505065 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Dec 16 12:24:39.505158 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 16 12:24:39.505169 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:24:39.505178 kernel: thunder_xcv, ver 1.0 Dec 16 12:24:39.505186 kernel: thunder_bgx, ver 1.0 Dec 16 12:24:39.505194 kernel: nicpf, ver 1.0 Dec 16 12:24:39.505205 kernel: nicvf, ver 1.0 Dec 16 12:24:39.505338 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:24:39.505437 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:24:38 UTC (1765887878) Dec 16 12:24:39.505448 kernel: hid: raw HID events driver (C) Jiri 
Kosina Dec 16 12:24:39.505457 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:24:39.505465 kernel: watchdog: NMI not fully supported Dec 16 12:24:39.505477 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:24:39.505485 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:24:39.505494 kernel: Segment Routing with IPv6 Dec 16 12:24:39.505502 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:24:39.505510 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:24:39.505518 kernel: Key type dns_resolver registered Dec 16 12:24:39.505526 kernel: registered taskstats version 1 Dec 16 12:24:39.505536 kernel: Loading compiled-in X.509 certificates Dec 16 12:24:39.505544 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 16 12:24:39.505552 kernel: Demotion targets for Node 0: null Dec 16 12:24:39.505560 kernel: Key type .fscrypt registered Dec 16 12:24:39.505568 kernel: Key type fscrypt-provisioning registered Dec 16 12:24:39.505576 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 12:24:39.505585 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:24:39.505593 kernel: ima: No architecture policies found
Dec 16 12:24:39.505602 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:24:39.505610 kernel: clk: Disabling unused clocks
Dec 16 12:24:39.505619 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:24:39.505627 kernel: Freeing unused kernel memory: 12416K
Dec 16 12:24:39.505635 kernel: Run /init as init process
Dec 16 12:24:39.505643 kernel: with arguments:
Dec 16 12:24:39.505651 kernel: /init
Dec 16 12:24:39.505660 kernel: with environment:
Dec 16 12:24:39.505668 kernel: HOME=/
Dec 16 12:24:39.505677 kernel: TERM=linux
Dec 16 12:24:39.505685 kernel: ACPI: bus type USB registered
Dec 16 12:24:39.505693 kernel: usbcore: registered new interface driver usbfs
Dec 16 12:24:39.505702 kernel: usbcore: registered new interface driver hub
Dec 16 12:24:39.505710 kernel: usbcore: registered new device driver usb
Dec 16 12:24:39.505805 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 16 12:24:39.505894 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 16 12:24:39.505982 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 16 12:24:39.506067 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 16 12:24:39.506156 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 16 12:24:39.506241 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 16 12:24:39.506378 kernel: hub 1-0:1.0: USB hub found
Dec 16 12:24:39.506488 kernel: hub 1-0:1.0: 4 ports detected
Dec 16 12:24:39.506603 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 16 12:24:39.506712 kernel: hub 2-0:1.0: USB hub found
Dec 16 12:24:39.506827 kernel: hub 2-0:1.0: 4 ports detected
Dec 16 12:24:39.506934 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 16 12:24:39.507027 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB)
Dec 16 12:24:39.507039 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:24:39.507048 kernel: GPT:25804799 != 104857599
Dec 16 12:24:39.507056 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:24:39.507065 kernel: GPT:25804799 != 104857599
Dec 16 12:24:39.507075 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:24:39.507084 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:24:39.507092 kernel: SCSI subsystem initialized
Dec 16 12:24:39.507101 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:24:39.507110 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:24:39.507118 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:24:39.507127 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:24:39.507137 kernel: raid6: neonx8 gen() 15725 MB/s
Dec 16 12:24:39.507146 kernel: raid6: neonx4 gen() 15679 MB/s
Dec 16 12:24:39.507155 kernel: raid6: neonx2 gen() 13179 MB/s
Dec 16 12:24:39.507163 kernel: raid6: neonx1 gen() 10489 MB/s
Dec 16 12:24:39.507172 kernel: raid6: int64x8 gen() 6829 MB/s
Dec 16 12:24:39.507180 kernel: raid6: int64x4 gen() 7353 MB/s
Dec 16 12:24:39.507188 kernel: raid6: int64x2 gen() 6117 MB/s
Dec 16 12:24:39.507197 kernel: raid6: int64x1 gen() 5036 MB/s
Dec 16 12:24:39.507207 kernel: raid6: using algorithm neonx8 gen() 15725 MB/s
Dec 16 12:24:39.507215 kernel: raid6: .... xor() 12073 MB/s, rmw enabled
Dec 16 12:24:39.507224 kernel: raid6: using neon recovery algorithm
Dec 16 12:24:39.507233 kernel: xor: measuring software checksum speed
Dec 16 12:24:39.507243 kernel: 8regs : 21590 MB/sec
Dec 16 12:24:39.507251 kernel: 32regs : 21687 MB/sec
Dec 16 12:24:39.507261 kernel: arm64_neon : 28128 MB/sec
Dec 16 12:24:39.507271 kernel: xor: using function: arm64_neon (28128 MB/sec)
Dec 16 12:24:39.507280 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:24:39.507439 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 16 12:24:39.507456 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (275)
Dec 16 12:24:39.507465 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc
Dec 16 12:24:39.507478 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:24:39.507487 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:24:39.507495 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:24:39.507504 kernel: loop: module loaded
Dec 16 12:24:39.507512 kernel: loop0: detected capacity change from 0 to 91480
Dec 16 12:24:39.507521 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:24:39.507628 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Dec 16 12:24:39.507644 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:24:39.507656 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:24:39.507665 systemd[1]: Detected virtualization kvm.
Dec 16 12:24:39.507674 systemd[1]: Detected architecture arm64.
Dec 16 12:24:39.507683 systemd[1]: Running in initrd.
Dec 16 12:24:39.507692 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:24:39.507703 systemd[1]: Hostname set to .
Dec 16 12:24:39.507711 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 12:24:39.507720 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:24:39.507729 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:24:39.507738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:24:39.507747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:24:39.507758 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:24:39.507767 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:24:39.507777 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:24:39.507786 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:24:39.507795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:24:39.507804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:24:39.507814 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:24:39.507824 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:24:39.507832 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:24:39.507841 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:24:39.507850 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:24:39.507859 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:24:39.507869 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:24:39.507878 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:24:39.507887 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:24:39.507896 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:24:39.507905 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:24:39.507913 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:24:39.507922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:24:39.507933 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:24:39.507942 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:24:39.507951 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:24:39.507959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:24:39.507968 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:24:39.507977 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:24:39.507986 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:24:39.507997 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:24:39.508006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:24:39.508016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:24:39.508026 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:24:39.508036 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:24:39.508045 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:24:39.508054 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:24:39.508090 systemd-journald[417]: Collecting audit messages is enabled.
Dec 16 12:24:39.508115 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:24:39.508125 kernel: Bridge firewalling registered
Dec 16 12:24:39.508134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:24:39.508146 kernel: audit: type=1130 audit(1765887879.435:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.508155 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:24:39.508165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:24:39.508175 kernel: audit: type=1130 audit(1765887879.445:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.508184 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:24:39.508193 kernel: audit: type=1130 audit(1765887879.452:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.508202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:24:39.508212 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:24:39.508223 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:24:39.508232 kernel: audit: type=1130 audit(1765887879.470:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.508241 kernel: audit: type=1334 audit(1765887879.473:6): prog-id=6 op=LOAD
Dec 16 12:24:39.508249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:24:39.508259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:24:39.508268 kernel: audit: type=1130 audit(1765887879.480:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.508277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:24:39.508304 kernel: audit: type=1130 audit(1765887879.491:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.508316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:24:39.508326 systemd-journald[417]: Journal started
Dec 16 12:24:39.508346 systemd-journald[417]: Runtime Journal (/run/log/journal/afb28e88a5cd4fb1a44b2602bcfac929) is 8M, max 319.5M, 311.5M free.
Dec 16 12:24:39.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.473000 audit: BPF prog-id=6 op=LOAD
Dec 16 12:24:39.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.432362 systemd-modules-load[418]: Inserted module 'br_netfilter'
Dec 16 12:24:39.510893 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:24:39.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.516301 kernel: audit: type=1130 audit(1765887879.510:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.515029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:24:39.518441 dracut-cmdline[451]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 16 12:24:39.536163 systemd-tmpfiles[469]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:24:39.537632 systemd-resolved[437]: Positive Trust Anchors:
Dec 16 12:24:39.537642 systemd-resolved[437]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:24:39.537645 systemd-resolved[437]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 12:24:39.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.537675 systemd-resolved[437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:24:39.552745 kernel: audit: type=1130 audit(1765887879.540:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.539852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:24:39.566023 systemd-resolved[437]: Defaulting to hostname 'linux'.
Dec 16 12:24:39.566979 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:24:39.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.568073 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:24:39.615337 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:24:39.626318 kernel: iscsi: registered transport (tcp)
Dec 16 12:24:39.641322 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:24:39.641348 kernel: QLogic iSCSI HBA Driver
Dec 16 12:24:39.665517 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:24:39.693468 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:24:39.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.696021 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:24:39.744674 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:24:39.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.747209 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:24:39.750460 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:24:39.793628 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:24:39.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.795000 audit: BPF prog-id=7 op=LOAD
Dec 16 12:24:39.795000 audit: BPF prog-id=8 op=LOAD
Dec 16 12:24:39.796032 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:24:39.831539 systemd-udevd[700]: Using default interface naming scheme 'v257'.
Dec 16 12:24:39.839673 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:24:39.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.844457 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:24:39.868206 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:24:39.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.870000 audit: BPF prog-id=9 op=LOAD
Dec 16 12:24:39.871058 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:24:39.873000 dracut-pre-trigger[775]: rd.md=0: removing MD RAID activation
Dec 16 12:24:39.896931 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:24:39.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.899473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:24:39.922078 systemd-networkd[812]: lo: Link UP
Dec 16 12:24:39.922087 systemd-networkd[812]: lo: Gained carrier
Dec 16 12:24:39.922861 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:24:39.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.924264 systemd[1]: Reached target network.target - Network.
Dec 16 12:24:39.985102 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:24:39.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:39.989677 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:24:40.074445 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Dec 16 12:24:40.084320 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 16 12:24:40.094963 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:24:40.110865 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:24:40.128320 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Dec 16 12:24:40.127043 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:24:40.132556 systemd-networkd[812]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:24:40.132571 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:24:40.133770 systemd-networkd[812]: eth0: Link UP
Dec 16 12:24:40.133933 systemd-networkd[812]: eth0: Gained carrier
Dec 16 12:24:40.133944 systemd-networkd[812]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:24:40.136461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:24:40.141203 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:24:40.142550 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:24:40.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:40.142653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:24:40.144340 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:24:40.152256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:24:40.157045 disk-uuid[882]: Primary Header is updated.
Dec 16 12:24:40.157045 disk-uuid[882]: Secondary Entries is updated.
Dec 16 12:24:40.157045 disk-uuid[882]: Secondary Header is updated.
Dec 16 12:24:40.183271 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Dec 16 12:24:40.185574 kernel: usbcore: registered new interface driver usbhid
Dec 16 12:24:40.185591 kernel: usbhid: USB HID core driver
Dec 16 12:24:40.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:40.187794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:24:40.222510 systemd-networkd[812]: eth0: DHCPv4 address 10.0.21.226/25, gateway 10.0.21.129 acquired from 10.0.21.129
Dec 16 12:24:40.244386 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:24:40.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:40.245874 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:24:40.248371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:24:40.250704 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:24:40.253542 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:24:40.283125 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:24:40.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.202837 disk-uuid[883]: Warning: The kernel is still using the old partition table.
Dec 16 12:24:41.202837 disk-uuid[883]: The new table will be used at the next reboot or after you
Dec 16 12:24:41.202837 disk-uuid[883]: run partprobe(8) or kpartx(8)
Dec 16 12:24:41.202837 disk-uuid[883]: The operation has completed successfully.
Dec 16 12:24:41.211352 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:24:41.212322 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:24:41.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.214742 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:24:41.259328 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (915)
Dec 16 12:24:41.261305 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:24:41.261339 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:24:41.269316 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:24:41.269362 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:24:41.275495 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:24:41.275589 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:24:41.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.278694 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:24:41.460383 ignition[934]: Ignition 2.22.0
Dec 16 12:24:41.460396 ignition[934]: Stage: fetch-offline
Dec 16 12:24:41.460433 ignition[934]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:24:41.460442 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 12:24:41.460605 ignition[934]: parsed url from cmdline: ""
Dec 16 12:24:41.460608 ignition[934]: no config URL provided
Dec 16 12:24:41.460613 ignition[934]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:24:41.460620 ignition[934]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:24:41.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.465373 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:24:41.460625 ignition[934]: failed to fetch config: resource requires networking
Dec 16 12:24:41.468162 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:24:41.460777 ignition[934]: Ignition finished successfully
Dec 16 12:24:41.501182 ignition[945]: Ignition 2.22.0
Dec 16 12:24:41.501203 ignition[945]: Stage: fetch
Dec 16 12:24:41.501369 ignition[945]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:24:41.501378 ignition[945]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 12:24:41.501462 ignition[945]: parsed url from cmdline: ""
Dec 16 12:24:41.501465 ignition[945]: no config URL provided
Dec 16 12:24:41.501469 ignition[945]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:24:41.501475 ignition[945]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:24:41.502091 ignition[945]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 16 12:24:41.502106 ignition[945]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 16 12:24:41.502145 ignition[945]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 16 12:24:41.533667 systemd-networkd[812]: eth0: Gained IPv6LL
Dec 16 12:24:41.787605 ignition[945]: GET result: OK
Dec 16 12:24:41.787858 ignition[945]: parsing config with SHA512: 73542cf25487fe383485f1eb7f34ceb89596fb8977d432f582552f92589dcc990b50b9398e1ef019e87154b554bc2d21f432779e75a102db1192e2cafefda6a8
Dec 16 12:24:41.792967 unknown[945]: fetched base config from "system"
Dec 16 12:24:41.792979 unknown[945]: fetched base config from "system"
Dec 16 12:24:41.793311 ignition[945]: fetch: fetch complete
Dec 16 12:24:41.792984 unknown[945]: fetched user config from "openstack"
Dec 16 12:24:41.798373 kernel: kauditd_printk_skb: 20 callbacks suppressed
Dec 16 12:24:41.798397 kernel: audit: type=1130 audit(1765887881.796:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.793316 ignition[945]: fetch: fetch passed
Dec 16 12:24:41.795501 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:24:41.793357 ignition[945]: Ignition finished successfully
Dec 16 12:24:41.797885 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:24:41.832998 ignition[953]: Ignition 2.22.0
Dec 16 12:24:41.833021 ignition[953]: Stage: kargs
Dec 16 12:24:41.833159 ignition[953]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:24:41.833167 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 12:24:41.833917 ignition[953]: kargs: kargs passed
Dec 16 12:24:41.836342 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:24:41.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.833963 ignition[953]: Ignition finished successfully
Dec 16 12:24:41.842234 kernel: audit: type=1130 audit(1765887881.837:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.839110 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:24:41.868367 ignition[961]: Ignition 2.22.0
Dec 16 12:24:41.868382 ignition[961]: Stage: disks
Dec 16 12:24:41.868540 ignition[961]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:24:41.868548 ignition[961]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 12:24:41.869313 ignition[961]: disks: disks passed
Dec 16 12:24:41.869360 ignition[961]: Ignition finished successfully
Dec 16 12:24:41.873588 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:24:41.877365 kernel: audit: type=1130 audit(1765887881.874:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.874986 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:24:41.878226 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:24:41.880144 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:24:41.881937 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:24:41.883474 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:24:41.885951 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:24:41.936471 systemd-fsck[971]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Dec 16 12:24:41.938877 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:24:41.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:41.941608 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:24:41.945244 kernel: audit: type=1130 audit(1765887881.940:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:42.048330 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none.
Dec 16 12:24:42.049221 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:24:42.050646 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:24:42.060103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:24:42.062141 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:24:42.063602 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:24:42.064233 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 16 12:24:42.066511 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:24:42.066546 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:24:42.088476 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:24:42.090675 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:24:42.107323 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (979)
Dec 16 12:24:42.110902 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:24:42.110958 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:24:42.117880 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:24:42.117940 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:24:42.119155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:24:42.177529 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:42.182799 initrd-setup-root[1007]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:24:42.189167 initrd-setup-root[1014]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:24:42.194299 initrd-setup-root[1021]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:24:42.197601 initrd-setup-root[1028]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:24:42.301744 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:24:42.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:42.304097 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:24:42.307457 kernel: audit: type=1130 audit(1765887882.302:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:42.307422 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:24:42.328179 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:24:42.330323 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:24:42.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:42.353333 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:24:42.357329 kernel: audit: type=1130 audit(1765887882.353:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:42.361756 ignition[1096]: INFO : Ignition 2.22.0
Dec 16 12:24:42.361756 ignition[1096]: INFO : Stage: mount
Dec 16 12:24:42.363637 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:24:42.363637 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 12:24:42.365565 ignition[1096]: INFO : mount: mount passed
Dec 16 12:24:42.365565 ignition[1096]: INFO : Ignition finished successfully
Dec 16 12:24:42.366457 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:24:42.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:42.371320 kernel: audit: type=1130 audit(1765887882.367:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:43.238774 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:45.248364 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:49.256357 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:49.263940 coreos-metadata[981]: Dec 16 12:24:49.263 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 12:24:49.282530 coreos-metadata[981]: Dec 16 12:24:49.282 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 16 12:24:51.680181 coreos-metadata[981]: Dec 16 12:24:51.680 INFO Fetch successful
Dec 16 12:24:51.681407 coreos-metadata[981]: Dec 16 12:24:51.680 INFO wrote hostname ci-4515-1-0-7-179ea8c226 to /sysroot/etc/hostname
Dec 16 12:24:51.683743 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 16 12:24:51.690505 kernel: audit: type=1130 audit(1765887891.684:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:51.690530 kernel: audit: type=1131 audit(1765887891.684:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:51.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:51.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:51.683854 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 16 12:24:51.686058 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:24:51.708567 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:24:51.738320 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (1114)
Dec 16 12:24:51.741312 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:24:51.741339 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:24:51.745312 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:24:51.745341 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:24:51.746793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:24:51.779781 ignition[1133]: INFO : Ignition 2.22.0
Dec 16 12:24:51.779781 ignition[1133]: INFO : Stage: files
Dec 16 12:24:51.781444 ignition[1133]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:24:51.781444 ignition[1133]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 12:24:51.781444 ignition[1133]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:24:51.784483 ignition[1133]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:24:51.784483 ignition[1133]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:24:51.787081 ignition[1133]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:24:51.787081 ignition[1133]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:24:51.789577 ignition[1133]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:24:51.787462 unknown[1133]: wrote ssh authorized keys file for user: core
Dec 16 12:24:51.791800 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:24:51.791800 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 16 12:24:51.846140 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:24:52.134163 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:24:52.134163 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:24:52.138206 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Dec 16 12:24:52.247880 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:24:52.774593 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:24:52.776710 ignition[1133]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:24:52.776710 ignition[1133]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:24:52.779930 ignition[1133]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:24:52.779930 ignition[1133]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:24:52.779930 ignition[1133]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:24:52.779930 ignition[1133]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:24:52.788842 kernel: audit: type=1130 audit(1765887892.783:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.788912 ignition[1133]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:24:52.788912 ignition[1133]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:24:52.788912 ignition[1133]: INFO : files: files passed
Dec 16 12:24:52.788912 ignition[1133]: INFO : Ignition finished successfully
Dec 16 12:24:52.781740 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:24:52.784846 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:24:52.789601 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:24:52.807505 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:24:52.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.807605 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:24:52.815073 kernel: audit: type=1130 audit(1765887892.808:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.815101 kernel: audit: type=1131 audit(1765887892.808:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.822254 initrd-setup-root-after-ignition[1165]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:24:52.822254 initrd-setup-root-after-ignition[1165]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:24:52.825602 initrd-setup-root-after-ignition[1169]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:24:52.826470 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:24:52.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.828347 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:24:52.833276 kernel: audit: type=1130 audit(1765887892.827:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.833342 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:24:52.893505 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:24:52.893633 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:24:52.900669 kernel: audit: type=1130 audit(1765887892.895:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.900699 kernel: audit: type=1131 audit(1765887892.895:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.895659 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:24:52.901516 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:24:52.903379 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:24:52.904368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:24:52.930643 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:24:52.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.933156 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:24:52.937021 kernel: audit: type=1130 audit(1765887892.931:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.960837 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:24:52.961059 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:24:52.963025 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:24:52.964910 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:24:52.966442 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:24:52.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.966576 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:24:52.972018 kernel: audit: type=1131 audit(1765887892.967:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.971106 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:24:52.972935 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:24:52.974402 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:24:52.975998 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:24:52.977691 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:24:52.979378 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:24:52.981242 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:24:52.982954 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:24:52.984650 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:24:52.986279 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:24:52.987885 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:24:52.989179 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:24:52.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.989356 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:24:52.991380 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:24:52.993112 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:24:52.994859 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:24:52.994940 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:24:52.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.996715 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:24:52.996836 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:24:53.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.999344 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:24:53.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:52.999483 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:24:53.001239 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:24:53.001363 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:24:53.003766 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:24:53.006227 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:24:53.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.007855 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:24:53.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.007971 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:24:53.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.009681 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:24:53.009785 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:24:53.011363 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:24:53.011470 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:24:53.016857 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:24:53.019446 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:24:53.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.032049 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:24:53.037136 ignition[1189]: INFO : Ignition 2.22.0
Dec 16 12:24:53.037136 ignition[1189]: INFO : Stage: umount
Dec 16 12:24:53.037136 ignition[1189]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:24:53.037136 ignition[1189]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 12:24:53.037136 ignition[1189]: INFO : umount: umount passed
Dec 16 12:24:53.037136 ignition[1189]: INFO : Ignition finished successfully
Dec 16 12:24:53.041024 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:24:53.041147 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:24:53.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.043012 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:24:53.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.043105 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:24:53.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.044864 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:24:53.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.044940 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:24:53.045970 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:24:53.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.046014 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:24:53.047390 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 12:24:53.047439 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 12:24:53.048828 systemd[1]: Stopped target network.target - Network.
Dec 16 12:24:53.050122 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:24:53.050175 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:24:53.051867 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:24:53.053172 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:24:53.056329 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:24:53.058162 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:24:53.059849 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:24:53.061213 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:24:53.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.061259 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:24:53.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.062879 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:24:53.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.062913 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:24:53.064877 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 12:24:53.064899 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:24:53.066349 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:24:53.066408 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:24:53.067926 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:24:53.067968 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:24:53.069333 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:24:53.069384 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:24:53.071065 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:24:53.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.072565 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:24:53.081119 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:24:53.081265 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:24:53.087000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 12:24:53.088855 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:24:53.088979 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:24:53.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.092817 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:24:53.093847 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:24:53.095000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 12:24:53.093898 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:24:53.096491 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:24:53.098190 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:24:53.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.098263 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:24:53.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:53.100269 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:24:53.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.100335 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:24:53.102007 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:24:53.102052 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:24:53.103980 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:24:53.121773 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:24:53.121929 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:24:53.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.124110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:24:53.124147 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:24:53.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.125173 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:24:53.125203 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:24:53.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.127191 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 16 12:24:53.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.127239 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:24:53.129795 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:24:53.129844 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:24:53.132220 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:24:53.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.132268 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:24:53.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.135928 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:24:53.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.136962 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:24:53.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.137025 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 16 12:24:53.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.138872 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:24:53.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.138934 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:24:53.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:53.140768 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:24:53.140862 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:24:53.142992 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:24:53.143037 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:24:53.145019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:24:53.145065 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:24:53.147803 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:24:53.147897 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:24:53.149440 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Dec 16 12:24:53.149517 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:24:53.151932 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:24:53.154099 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:24:53.173388 systemd[1]: Switching root. Dec 16 12:24:53.213994 systemd-journald[417]: Journal stopped Dec 16 12:24:54.111347 systemd-journald[417]: Received SIGTERM from PID 1 (systemd). Dec 16 12:24:54.111436 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:24:54.111459 kernel: SELinux: policy capability open_perms=1 Dec 16 12:24:54.111473 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:24:54.111483 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:24:54.111495 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:24:54.111506 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:24:54.111521 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:24:54.111535 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:24:54.111546 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:24:54.111556 systemd[1]: Successfully loaded SELinux policy in 61.588ms. Dec 16 12:24:54.111573 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.038ms. Dec 16 12:24:54.111585 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:24:54.111596 systemd[1]: Detected virtualization kvm. Dec 16 12:24:54.111607 systemd[1]: Detected architecture arm64. Dec 16 12:24:54.111620 systemd[1]: Detected first boot. Dec 16 12:24:54.111630 systemd[1]: Hostname set to <ci-4515-1-0-7-179ea8c226>. 
Dec 16 12:24:54.111641 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:24:54.111652 zram_generator::config[1235]: No configuration found. Dec 16 12:24:54.111666 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:24:54.111677 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:24:54.111689 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:24:54.111703 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:24:54.111715 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:24:54.111726 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:24:54.111737 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:24:54.111752 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:24:54.111763 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:24:54.111775 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:24:54.111787 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:24:54.111798 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:24:54.111812 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:24:54.111823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:24:54.111834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:24:54.111845 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:24:54.111857 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Dec 16 12:24:54.111868 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:24:54.111879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:24:54.111890 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:24:54.111901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:24:54.111912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:24:54.111924 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:24:54.111936 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:24:54.111947 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:24:54.111958 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:24:54.111972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:24:54.111983 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:24:54.111996 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:24:54.112007 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:24:54.112017 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:24:54.112028 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:24:54.112039 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:24:54.112050 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:24:54.112062 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:24:54.112074 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Dec 16 12:24:54.112085 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:24:54.112096 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:24:54.112110 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:24:54.112121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:24:54.112132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:24:54.112143 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:24:54.112156 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:24:54.112166 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:24:54.112177 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:24:54.112188 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:24:54.112201 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:24:54.112213 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:24:54.112224 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:24:54.112236 systemd[1]: Reached target machines.target - Containers. Dec 16 12:24:54.112247 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:24:54.112258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:24:54.112269 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:24:54.112281 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Dec 16 12:24:54.112404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:24:54.112422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:24:54.112434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:24:54.112445 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:24:54.112456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:24:54.112467 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:24:54.112479 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:24:54.112490 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:24:54.112501 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:24:54.112512 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:24:54.112524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:24:54.112535 kernel: fuse: init (API version 7.41) Dec 16 12:24:54.112547 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:24:54.112558 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:24:54.112569 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:24:54.112581 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:24:54.112591 kernel: ACPI: bus type drm_connector registered Dec 16 12:24:54.112602 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 16 12:24:54.112612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:24:54.112623 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:24:54.112635 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:24:54.112646 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:24:54.112685 systemd-journald[1307]: Collecting audit messages is enabled. Dec 16 12:24:54.112707 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:24:54.112719 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:24:54.112732 systemd-journald[1307]: Journal started Dec 16 12:24:54.112754 systemd-journald[1307]: Runtime Journal (/run/log/journal/afb28e88a5cd4fb1a44b2602bcfac929) is 8M, max 319.5M, 311.5M free. Dec 16 12:24:53.969000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:24:54.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:54.061000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:24:54.061000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:24:54.062000 audit: BPF prog-id=15 op=LOAD Dec 16 12:24:54.062000 audit: BPF prog-id=16 op=LOAD Dec 16 12:24:54.063000 audit: BPF prog-id=17 op=LOAD Dec 16 12:24:54.108000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:24:54.108000 audit[1307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffca2554d0 a2=4000 a3=0 items=0 ppid=1 pid=1307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:54.108000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:24:53.876797 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:24:53.900775 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:24:53.901216 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:24:54.114699 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:24:54.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.115720 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:24:54.118339 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:24:54.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.119666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 16 12:24:54.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.121064 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:24:54.121246 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:24:54.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.122618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:24:54.122794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:24:54.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.124072 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:24:54.124234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:24:54.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:54.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.126647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:24:54.127184 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:24:54.128638 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:24:54.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.128858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:24:54.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.130184 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:24:54.130459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:24:54.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:54.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.131781 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:24:54.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.135203 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:24:54.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.137405 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:24:54.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.139481 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:24:54.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.151980 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:24:54.153869 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. 
Dec 16 12:24:54.155103 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:24:54.155137 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:24:54.157042 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:24:54.158403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:24:54.158517 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:24:54.160600 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:24:54.162520 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:24:54.163576 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:24:54.167447 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:24:54.168422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:24:54.175689 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:24:54.177771 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:24:54.179809 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:24:54.184631 systemd-journald[1307]: Time spent on flushing to /var/log/journal/afb28e88a5cd4fb1a44b2602bcfac929 is 24.435ms for 1816 entries. Dec 16 12:24:54.184631 systemd-journald[1307]: System Journal (/var/log/journal/afb28e88a5cd4fb1a44b2602bcfac929) is 8M, max 588.1M, 580.1M free. 
Dec 16 12:24:54.228802 systemd-journald[1307]: Received client request to flush runtime journal. Dec 16 12:24:54.228917 kernel: loop1: detected capacity change from 0 to 109872 Dec 16 12:24:54.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.194463 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:24:54.196169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:24:54.198043 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:24:54.200567 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:24:54.216804 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:24:54.230766 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:24:54.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.234772 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Dec 16 12:24:54.234783 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. 
Dec 16 12:24:54.241443 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:24:54.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.245179 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:24:54.256523 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:24:54.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.266329 kernel: loop2: detected capacity change from 0 to 200800 Dec 16 12:24:54.291840 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:24:54.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:54.293000 audit: BPF prog-id=18 op=LOAD Dec 16 12:24:54.293000 audit: BPF prog-id=19 op=LOAD Dec 16 12:24:54.293000 audit: BPF prog-id=20 op=LOAD Dec 16 12:24:54.296000 audit: BPF prog-id=21 op=LOAD Dec 16 12:24:54.295075 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:24:54.297563 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:24:54.302455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 16 12:24:54.307329 kernel: loop3: detected capacity change from 0 to 1648
Dec 16 12:24:54.308000 audit: BPF prog-id=22 op=LOAD
Dec 16 12:24:54.308000 audit: BPF prog-id=23 op=LOAD
Dec 16 12:24:54.308000 audit: BPF prog-id=24 op=LOAD
Dec 16 12:24:54.310442 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Dec 16 12:24:54.311000 audit: BPF prog-id=25 op=LOAD
Dec 16 12:24:54.312000 audit: BPF prog-id=26 op=LOAD
Dec 16 12:24:54.312000 audit: BPF prog-id=27 op=LOAD
Dec 16 12:24:54.313001 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:24:54.320455 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Dec 16 12:24:54.320473 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Dec 16 12:24:54.328586 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:24:54.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:54.342632 kernel: loop4: detected capacity change from 0 to 100192
Dec 16 12:24:54.353810 systemd-nsresourced[1375]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 16 12:24:54.355177 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 16 12:24:54.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:54.358710 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:24:54.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:54.385327 kernel: loop5: detected capacity change from 0 to 109872
Dec 16 12:24:54.402239 systemd-oomd[1372]: No swap; memory pressure usage will be degraded
Dec 16 12:24:54.402743 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 16 12:24:54.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:54.406318 kernel: loop6: detected capacity change from 0 to 200800
Dec 16 12:24:54.419143 systemd-resolved[1373]: Positive Trust Anchors:
Dec 16 12:24:54.419165 systemd-resolved[1373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:24:54.419169 systemd-resolved[1373]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 12:24:54.419200 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:24:54.429037 systemd-resolved[1373]: Using system hostname 'ci-4515-1-0-7-179ea8c226'.
Dec 16 12:24:54.430319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:24:54.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:54.431997 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:24:54.443362 kernel: loop7: detected capacity change from 0 to 1648
Dec 16 12:24:54.454502 kernel: loop1: detected capacity change from 0 to 100192
Dec 16 12:24:54.470566 (sd-merge)[1395]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'.
Dec 16 12:24:54.473487 (sd-merge)[1395]: Merged extensions into '/usr'.
Dec 16 12:24:54.477490 systemd[1]: Reload requested from client PID 1353 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:24:54.477512 systemd[1]: Reloading...
Dec 16 12:24:54.540329 zram_generator::config[1428]: No configuration found.
Dec 16 12:24:54.692482 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:24:54.692816 systemd[1]: Reloading finished in 214 ms.
Dec 16 12:24:54.726267 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:24:54.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:54.729323 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:24:54.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:54.744481 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:24:54.746199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:24:54.747000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 12:24:54.747000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 12:24:54.747000 audit: BPF prog-id=28 op=LOAD
Dec 16 12:24:54.747000 audit: BPF prog-id=29 op=LOAD
Dec 16 12:24:54.748970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:24:54.750000 audit: BPF prog-id=30 op=LOAD
Dec 16 12:24:54.750000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=31 op=LOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=32 op=LOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=33 op=LOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=34 op=LOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=35 op=LOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 12:24:54.751000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 12:24:54.753000 audit: BPF prog-id=36 op=LOAD
Dec 16 12:24:54.753000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 12:24:54.753000 audit: BPF prog-id=37 op=LOAD
Dec 16 12:24:54.753000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 12:24:54.753000 audit: BPF prog-id=38 op=LOAD
Dec 16 12:24:54.753000 audit: BPF prog-id=39 op=LOAD
Dec 16 12:24:54.753000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 12:24:54.753000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 12:24:54.754000 audit: BPF prog-id=40 op=LOAD
Dec 16 12:24:54.754000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 12:24:54.754000 audit: BPF prog-id=41 op=LOAD
Dec 16 12:24:54.754000 audit: BPF prog-id=42 op=LOAD
Dec 16 12:24:54.754000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 12:24:54.754000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 12:24:54.764021 systemd-tmpfiles[1463]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:24:54.764057 systemd-tmpfiles[1463]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:24:54.764350 systemd-tmpfiles[1463]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:24:54.765324 systemd-tmpfiles[1463]: ACLs are not supported, ignoring.
Dec 16 12:24:54.765399 systemd-tmpfiles[1463]: ACLs are not supported, ignoring.
Dec 16 12:24:54.766884 systemd[1]: Reload requested from client PID 1462 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:24:54.766903 systemd[1]: Reloading...
Dec 16 12:24:54.772666 systemd-tmpfiles[1463]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:24:54.772682 systemd-tmpfiles[1463]: Skipping /boot
Dec 16 12:24:54.773115 systemd-udevd[1464]: Using default interface naming scheme 'v257'.
Dec 16 12:24:54.780965 systemd-tmpfiles[1463]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:24:54.780984 systemd-tmpfiles[1463]: Skipping /boot
Dec 16 12:24:54.823334 zram_generator::config[1496]: No configuration found.
Dec 16 12:24:54.957336 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 12:24:54.997244 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0
Dec 16 12:24:54.997522 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 16 12:24:54.997551 kernel: [drm] features: -context_init
Dec 16 12:24:54.998658 kernel: [drm] number of scanouts: 1
Dec 16 12:24:55.002316 kernel: [drm] number of cap sets: 0
Dec 16 12:24:55.003324 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0
Dec 16 12:24:55.008225 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:24:55.009600 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 16 12:24:55.009645 systemd[1]: Reloading finished in 242 ms.
Dec 16 12:24:55.019441 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:24:55.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.023000 audit: BPF prog-id=43 op=LOAD
Dec 16 12:24:55.023000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 12:24:55.023000 audit: BPF prog-id=44 op=LOAD
Dec 16 12:24:55.023000 audit: BPF prog-id=45 op=LOAD
Dec 16 12:24:55.023000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 12:24:55.023000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 12:24:55.024000 audit: BPF prog-id=46 op=LOAD
Dec 16 12:24:55.024000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 12:24:55.025380 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 12:24:55.027000 audit: BPF prog-id=47 op=LOAD
Dec 16 12:24:55.027000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 12:24:55.027000 audit: BPF prog-id=48 op=LOAD
Dec 16 12:24:55.027000 audit: BPF prog-id=49 op=LOAD
Dec 16 12:24:55.027000 audit: BPF prog-id=41 op=UNLOAD
Dec 16 12:24:55.027000 audit: BPF prog-id=42 op=UNLOAD
Dec 16 12:24:55.028000 audit: BPF prog-id=50 op=LOAD
Dec 16 12:24:55.028000 audit: BPF prog-id=51 op=LOAD
Dec 16 12:24:55.028000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 12:24:55.028000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 12:24:55.029000 audit: BPF prog-id=52 op=LOAD
Dec 16 12:24:55.029000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 12:24:55.030309 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 16 12:24:55.030000 audit: BPF prog-id=53 op=LOAD
Dec 16 12:24:55.030000 audit: BPF prog-id=54 op=LOAD
Dec 16 12:24:55.030000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 12:24:55.030000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 12:24:55.040000 audit: BPF prog-id=55 op=LOAD
Dec 16 12:24:55.040000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 12:24:55.040000 audit: BPF prog-id=56 op=LOAD
Dec 16 12:24:55.040000 audit: BPF prog-id=57 op=LOAD
Dec 16 12:24:55.040000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 12:24:55.040000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 12:24:55.045286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:24:55.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.080034 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:24:55.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.084439 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:24:55.088557 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:24:55.089846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:24:55.090782 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:24:55.094761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:24:55.097000 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:24:55.099011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:24:55.100932 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:24:55.105444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:24:55.108125 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm...
Dec 16 12:24:55.109870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:24:55.109984 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:24:55.111165 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:24:55.114940 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:24:55.116465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:24:55.117558 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:24:55.120000 audit: BPF prog-id=58 op=LOAD
Dec 16 12:24:55.124012 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 12:24:55.124065 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 12:24:55.124980 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:24:55.127864 kernel: PTP clock support registered
Dec 16 12:24:55.126109 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:24:55.128523 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:24:55.133234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:24:55.135792 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:24:55.136057 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:24:55.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.139186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:24:55.143552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:24:55.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.145760 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:24:55.146102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:24:55.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.150000 audit[1604]: SYSTEM_BOOT pid=1604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.150613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:24:55.151117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:24:55.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.153280 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:24:55.153675 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:24:55.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.157877 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:24:55.158088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:24:55.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.159893 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully.
Dec 16 12:24:55.160174 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm.
Dec 16 12:24:55.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.162542 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:24:55.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.167915 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:24:55.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:55.185000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 12:24:55.185000 audit[1630]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8a55e70 a2=420 a3=0 items=0 ppid=1584 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:24:55.185000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:24:55.185788 augenrules[1630]: No rules
Dec 16 12:24:55.188543 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:24:55.190770 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:24:55.192268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:24:55.192427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:24:55.192933 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:24:55.194344 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:24:55.198576 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:24:55.202605 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:24:55.206615 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:24:55.249783 systemd-networkd[1602]: lo: Link UP
Dec 16 12:24:55.249795 systemd-networkd[1602]: lo: Gained carrier
Dec 16 12:24:55.251005 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:24:55.252458 systemd[1]: Reached target network.target - Network.
Dec 16 12:24:55.252723 systemd-networkd[1602]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:24:55.252728 systemd-networkd[1602]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:24:55.253437 systemd-networkd[1602]: eth0: Link UP
Dec 16 12:24:55.253588 systemd-networkd[1602]: eth0: Gained carrier
Dec 16 12:24:55.253606 systemd-networkd[1602]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:24:55.254658 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:24:55.257104 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:24:55.265340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:24:55.268371 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:24:55.270044 systemd-networkd[1602]: eth0: DHCPv4 address 10.0.21.226/25, gateway 10.0.21.129 acquired from 10.0.21.129
Dec 16 12:24:55.270478 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:24:55.276385 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 12:24:55.793403 ldconfig[1594]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:24:55.799399 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:24:55.801705 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:24:55.821843 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:24:55.823175 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:24:55.825458 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:24:55.826516 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:24:55.827770 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:24:55.828802 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:24:55.829903 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 16 12:24:55.831267 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 16 12:24:55.832190 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:24:55.833348 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:24:55.833381 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:24:55.834126 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:24:55.836594 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:24:55.838854 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:24:55.841554 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:24:55.842836 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:24:55.843976 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:24:55.846900 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:24:55.848105 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:24:55.849764 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:24:55.850825 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:24:55.851684 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:24:55.852527 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:24:55.852559 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:24:55.855971 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 12:24:55.857655 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:24:55.859668 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 12:24:55.862474 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:24:55.864173 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 12:24:55.867331 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:55.867845 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 12:24:55.878269 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 12:24:55.879396 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 12:24:55.880548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 12:24:55.881969 jq[1665]: false
Dec 16 12:24:55.882426 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 12:24:55.885625 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 12:24:55.890175 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 12:24:55.895050 extend-filesystems[1667]: Found /dev/vda6
Dec 16 12:24:55.902321 extend-filesystems[1667]: Found /dev/vda9
Dec 16 12:24:55.902321 extend-filesystems[1667]: Checking size of /dev/vda9
Dec 16 12:24:55.899478 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 12:24:55.900472 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 12:24:55.901031 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 12:24:55.903472 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 12:24:55.906477 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 12:24:55.912185 chronyd[1659]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 12:24:55.913385 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 12:24:55.915482 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 12:24:55.915540 chronyd[1659]: Loaded seccomp filter (level 2)
Dec 16 12:24:55.915726 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 12:24:55.915942 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 12:24:55.917619 extend-filesystems[1667]: Resized partition /dev/vda9
Dec 16 12:24:55.924828 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 12:24:55.925045 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 12:24:55.926471 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 12:24:55.926683 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 12:24:55.927908 jq[1681]: true
Dec 16 12:24:55.941381 extend-filesystems[1699]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 12:24:55.942501 update_engine[1679]: I20251216 12:24:55.940913 1679 main.cc:92] Flatcar Update Engine starting
Dec 16 12:24:55.944631 jq[1705]: true
Dec 16 12:24:55.965225 tar[1698]: linux-arm64/LICENSE
Dec 16 12:24:55.965484 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks
Dec 16 12:24:55.965512 tar[1698]: linux-arm64/helm
Dec 16 12:24:55.995250 dbus-daemon[1662]: [system] SELinux support is enabled
Dec 16 12:24:55.995532 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 12:24:56.005374 update_engine[1679]: I20251216 12:24:56.001470 1679 update_check_scheduler.cc:74] Next update check in 6m6s
Dec 16 12:24:56.002115 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 12:24:56.002142 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 12:24:56.003630 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 12:24:56.003717 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 12:24:56.005035 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 12:24:56.009107 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 12:24:56.018241 systemd-logind[1677]: New seat seat0.
Dec 16 12:24:56.078445 systemd-logind[1677]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 16 12:24:56.078469 systemd-logind[1677]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Dec 16 12:24:56.078801 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 12:24:56.085472 locksmithd[1731]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 12:24:56.093204 containerd[1706]: time="2025-12-16T12:24:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 12:24:56.093938 containerd[1706]: time="2025-12-16T12:24:56.093861000Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 16 12:24:56.106157 containerd[1706]: time="2025-12-16T12:24:56.106109000Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.04µs"
Dec 16 12:24:56.106157 containerd[1706]: time="2025-12-16T12:24:56.106146520Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 12:24:56.106327 containerd[1706]: time="2025-12-16T12:24:56.106191320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 12:24:56.106327 containerd[1706]: time="2025-12-16T12:24:56.106203400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 12:24:56.107721 bash[1730]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:24:56.109351 containerd[1706]: time="2025-12-16T12:24:56.109203800Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 12:24:56.109351 containerd[1706]: time="2025-12-16T12:24:56.109239840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:24:56.109351 containerd[1706]: time="2025-12-16T12:24:56.109313200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:24:56.109351 containerd[1706]: time="2025-12-16T12:24:56.109325080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:24:56.109890 containerd[1706]: time="2025-12-16T12:24:56.109805440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:24:56.109890 containerd[1706]: time="2025-12-16T12:24:56.109827560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:24:56.109890 containerd[1706]: time="2025-12-16T12:24:56.109847360Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:24:56.109890 containerd[1706]: time="2025-12-16T12:24:56.109856200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 12:24:56.110242 containerd[1706]: time="2025-12-16T12:24:56.110004200Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 12:24:56.110242 containerd[1706]: time="2025-12-16T12:24:56.110016880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 12:24:56.110242 containerd[1706]: time="2025-12-16T12:24:56.110081760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 12:24:56.110242 containerd[1706]: time="2025-12-16T12:24:56.110233880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:24:56.110537 containerd[1706]: time="2025-12-16T12:24:56.110260400Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:24:56.110537 containerd[1706]: time="2025-12-16T12:24:56.110270400Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 12:24:56.110537 containerd[1706]: time="2025-12-16T12:24:56.110328360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 12:24:56.110537 containerd[1706]: time="2025-12-16T12:24:56.110532400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 12:24:56.110859 containerd[1706]: time="2025-12-16T12:24:56.110601920Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 12:24:56.111174 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 12:24:56.115008 systemd[1]: Starting sshkeys.service...
Dec 16 12:24:56.136956 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 12:24:56.139792 containerd[1706]: time="2025-12-16T12:24:56.139690720Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 12:24:56.139792 containerd[1706]: time="2025-12-16T12:24:56.139771360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139864520Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139878720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139892760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139905360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139918040Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139928000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139951360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139964200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139974760Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.139990560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.140002160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.140014640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 12:24:56.140502 containerd[1706]: time="2025-12-16T12:24:56.140148280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140168320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140182280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140192800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140205280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140215760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140226480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140237240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140247320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140259000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140269600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140327480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140370040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140383640Z" level=info msg="Start snapshots syncer"
Dec 16 12:24:56.140751 containerd[1706]: time="2025-12-16T12:24:56.140416000Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 12:24:56.143562 containerd[1706]: time="2025-12-16T12:24:56.140662280Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 12:24:56.143562 containerd[1706]: time="2025-12-16T12:24:56.140711560Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 12:24:56.141581 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140765480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140872200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140895200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140906480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140916280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140927720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140953800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140966960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140977080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.140995160Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.141042320Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.141057000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:24:56.143738 containerd[1706]: time="2025-12-16T12:24:56.141065200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141077600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141085400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141096960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141107920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141119640Z" level=info msg="runtime interface created"
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141124800Z" level=info msg="created NRI interface"
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141132440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141143640Z" level=info msg="Connect containerd service"
Dec 16 12:24:56.143944 containerd[1706]: time="2025-12-16T12:24:56.141163280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 12:24:56.147625 containerd[1706]: time="2025-12-16T12:24:56.147570320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 12:24:56.157333 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:56.230528 containerd[1706]: time="2025-12-16T12:24:56.230468720Z" level=info msg="Start subscribing containerd event"
Dec 16 12:24:56.230528 containerd[1706]: time="2025-12-16T12:24:56.230538960Z" level=info msg="Start recovering state"
Dec 16 12:24:56.230998 containerd[1706]: time="2025-12-16T12:24:56.230971480Z" level=info msg="Start event monitor"
Dec 16 12:24:56.231034 containerd[1706]: time="2025-12-16T12:24:56.231002480Z" level=info msg="Start cni network conf syncer for default"
Dec 16 12:24:56.231034 containerd[1706]: time="2025-12-16T12:24:56.231011920Z" level=info msg="Start streaming server"
Dec 16 12:24:56.231034 containerd[1706]: time="2025-12-16T12:24:56.231020440Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 12:24:56.231034 containerd[1706]: time="2025-12-16T12:24:56.231028240Z" level=info msg="runtime interface starting up..."
Dec 16 12:24:56.231107 containerd[1706]: time="2025-12-16T12:24:56.231034200Z" level=info msg="starting plugins..."
Dec 16 12:24:56.231107 containerd[1706]: time="2025-12-16T12:24:56.231065040Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 12:24:56.231456 containerd[1706]: time="2025-12-16T12:24:56.231429040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 12:24:56.231572 containerd[1706]: time="2025-12-16T12:24:56.231551320Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 12:24:56.234403 containerd[1706]: time="2025-12-16T12:24:56.234368280Z" level=info msg="containerd successfully booted in 0.141512s"
Dec 16 12:24:56.234549 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 12:24:56.258341 kernel: EXT4-fs (vda9): resized filesystem to 11516923
Dec 16 12:24:56.277700 extend-filesystems[1699]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 16 12:24:56.277700 extend-filesystems[1699]: old_desc_blocks = 1, new_desc_blocks = 6
Dec 16 12:24:56.277700 extend-filesystems[1699]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long.
Dec 16 12:24:56.281780 extend-filesystems[1667]: Resized filesystem in /dev/vda9
Dec 16 12:24:56.279267 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 12:24:56.279577 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 12:24:56.391905 tar[1698]: linux-arm64/README.md
Dec 16 12:24:56.408832 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 12:24:56.703253 sshd_keygen[1695]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 12:24:56.723531 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 12:24:56.726354 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 12:24:56.741957 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 12:24:56.743343 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 12:24:56.746365 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 12:24:56.764193 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 12:24:56.767202 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 12:24:56.769566 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 16 12:24:56.770841 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 12:24:56.881368 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:57.169323 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:57.277467 systemd-networkd[1602]: eth0: Gained IPv6LL
Dec 16 12:24:57.279760 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 12:24:57.281654 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 12:24:57.284198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:24:57.286723 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 12:24:57.322198 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 12:24:58.108011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:24:58.112161 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:24:58.603145 kubelet[1803]: E1216 12:24:58.603027 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:24:58.605313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:24:58.605449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:24:58.606427 systemd[1]: kubelet.service: Consumed 717ms CPU time, 249M memory peak.
Dec 16 12:24:58.891333 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:24:59.181336 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:25:02.899342 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:25:02.909639 coreos-metadata[1661]: Dec 16 12:25:02.909 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 12:25:02.927349 coreos-metadata[1661]: Dec 16 12:25:02.927 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 16 12:25:03.196350 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 12:25:03.202195 coreos-metadata[1745]: Dec 16 12:25:03.202 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 12:25:03.214681 coreos-metadata[1745]: Dec 16 12:25:03.214 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 16 12:25:03.775727 coreos-metadata[1661]: Dec 16 12:25:03.775 INFO Fetch successful
Dec 16 12:25:03.775931 coreos-metadata[1661]: Dec 16 12:25:03.775 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 16 12:25:03.940082 coreos-metadata[1745]: Dec 16 12:25:03.939 INFO Fetch successful
Dec 16 12:25:03.940082 coreos-metadata[1745]: Dec 16 12:25:03.940 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 16 12:25:04.183927 coreos-metadata[1661]: Dec 16 12:25:04.183 INFO Fetch successful
Dec 16 12:25:04.183927 coreos-metadata[1661]: Dec 16 12:25:04.183 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 16 12:25:04.186558 coreos-metadata[1745]: Dec 16 12:25:04.186 INFO Fetch successful
Dec 16 12:25:04.188869 unknown[1745]: wrote ssh authorized keys file for user: core
Dec 16 12:25:04.222163 update-ssh-keys[1823]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:25:04.223463 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 12:25:04.225454 systemd[1]: Finished sshkeys.service.
Dec 16 12:25:04.318391 coreos-metadata[1661]: Dec 16 12:25:04.318 INFO Fetch successful
Dec 16 12:25:04.318543 coreos-metadata[1661]: Dec 16 12:25:04.318 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 16 12:25:04.454451 coreos-metadata[1661]: Dec 16 12:25:04.454 INFO Fetch successful
Dec 16 12:25:04.454451 coreos-metadata[1661]: Dec 16 12:25:04.454 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 16 12:25:04.590072 coreos-metadata[1661]: Dec 16 12:25:04.589 INFO Fetch successful
Dec 16 12:25:04.590072 coreos-metadata[1661]: Dec 16 12:25:04.589 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 16 12:25:04.722995 coreos-metadata[1661]: Dec 16 12:25:04.722 INFO Fetch successful
Dec 16 12:25:04.777989 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 12:25:04.779841 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 12:25:04.780433 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 12:25:04.781479 systemd[1]: Startup finished in 2.472s (kernel) + 14.181s (initrd) + 11.489s (userspace) = 28.143s.
Dec 16 12:25:05.619382 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 12:25:05.620543 systemd[1]: Started sshd@0-10.0.21.226:22-139.178.68.195:40836.service - OpenSSH per-connection server daemon (139.178.68.195:40836).
Dec 16 12:25:06.520795 sshd[1832]: Accepted publickey for core from 139.178.68.195 port 40836 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:25:06.523509 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:06.529752 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 12:25:06.530654 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 12:25:06.534574 systemd-logind[1677]: New session 1 of user core.
Dec 16 12:25:06.555263 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 12:25:06.557894 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 12:25:06.585915 (systemd)[1837]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 12:25:06.588628 systemd-logind[1677]: New session c1 of user core.
Dec 16 12:25:06.709763 systemd[1837]: Queued start job for default target default.target.
Dec 16 12:25:06.720509 systemd[1837]: Created slice app.slice - User Application Slice.
Dec 16 12:25:06.720545 systemd[1837]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Dec 16 12:25:06.720558 systemd[1837]: Reached target paths.target - Paths.
Dec 16 12:25:06.720611 systemd[1837]: Reached target timers.target - Timers.
Dec 16 12:25:06.721877 systemd[1837]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 12:25:06.722662 systemd[1837]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Dec 16 12:25:06.732384 systemd[1837]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 12:25:06.732511 systemd[1837]: Reached target sockets.target - Sockets.
Dec 16 12:25:06.733113 systemd[1837]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Dec 16 12:25:06.733219 systemd[1837]: Reached target basic.target - Basic System.
Dec 16 12:25:06.733306 systemd[1837]: Reached target default.target - Main User Target.
Dec 16 12:25:06.733343 systemd[1837]: Startup finished in 138ms.
Dec 16 12:25:06.733500 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 12:25:06.735277 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 12:25:07.245831 systemd[1]: Started sshd@1-10.0.21.226:22-139.178.68.195:40848.service - OpenSSH per-connection server daemon (139.178.68.195:40848).
Dec 16 12:25:08.114363 sshd[1850]: Accepted publickey for core from 139.178.68.195 port 40848 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:25:08.115682 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:08.119583 systemd-logind[1677]: New session 2 of user core.
Dec 16 12:25:08.127511 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 12:25:08.612226 sshd[1853]: Connection closed by 139.178.68.195 port 40848
Dec 16 12:25:08.612619 sshd-session[1850]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:08.617075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:25:08.617537 systemd[1]: sshd@1-10.0.21.226:22-139.178.68.195:40848.service: Deactivated successfully.
Dec 16 12:25:08.619144 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 12:25:08.620283 systemd-logind[1677]: Session 2 logged out. Waiting for processes to exit.
Dec 16 12:25:08.622130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:25:08.622597 systemd-logind[1677]: Removed session 2.
Dec 16 12:25:08.742331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:25:08.745817 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:25:08.779638 systemd[1]: Started sshd@2-10.0.21.226:22-139.178.68.195:40852.service - OpenSSH per-connection server daemon (139.178.68.195:40852).
Dec 16 12:25:08.782022 kubelet[1866]: E1216 12:25:08.781965 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:25:08.785188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:25:08.785349 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:25:08.795573 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.2M memory peak.
Dec 16 12:25:09.622018 sshd[1874]: Accepted publickey for core from 139.178.68.195 port 40852 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:25:09.623360 sshd-session[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:09.627298 systemd-logind[1677]: New session 3 of user core.
Dec 16 12:25:09.646747 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 12:25:10.099493 sshd[1878]: Connection closed by 139.178.68.195 port 40852
Dec 16 12:25:10.099261 sshd-session[1874]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:10.103813 systemd[1]: sshd@2-10.0.21.226:22-139.178.68.195:40852.service: Deactivated successfully.
Dec 16 12:25:10.105544 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 12:25:10.106389 systemd-logind[1677]: Session 3 logged out. Waiting for processes to exit.
Dec 16 12:25:10.107515 systemd-logind[1677]: Removed session 3.
Dec 16 12:25:10.268732 systemd[1]: Started sshd@3-10.0.21.226:22-139.178.68.195:59332.service - OpenSSH per-connection server daemon (139.178.68.195:59332).
Dec 16 12:25:11.131585 sshd[1884]: Accepted publickey for core from 139.178.68.195 port 59332 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:25:11.132760 sshd-session[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:11.136741 systemd-logind[1677]: New session 4 of user core.
Dec 16 12:25:11.146668 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:25:11.617482 sshd[1887]: Connection closed by 139.178.68.195 port 59332
Dec 16 12:25:11.617405 sshd-session[1884]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:11.621482 systemd-logind[1677]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:25:11.621736 systemd[1]: sshd@3-10.0.21.226:22-139.178.68.195:59332.service: Deactivated successfully.
Dec 16 12:25:11.623465 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:25:11.626071 systemd-logind[1677]: Removed session 4.
Dec 16 12:25:11.799626 systemd[1]: Started sshd@4-10.0.21.226:22-139.178.68.195:59336.service - OpenSSH per-connection server daemon (139.178.68.195:59336).
Dec 16 12:25:12.643992 sshd[1893]: Accepted publickey for core from 139.178.68.195 port 59336 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:25:12.645319 sshd-session[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:12.649204 systemd-logind[1677]: New session 5 of user core.
Dec 16 12:25:12.655467 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:25:12.986657 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:25:12.986933 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:25:13.002496 sudo[1897]: pam_unix(sudo:session): session closed for user root
Dec 16 12:25:13.163348 sshd[1896]: Connection closed by 139.178.68.195 port 59336
Dec 16 12:25:13.163189 sshd-session[1893]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:13.167438 systemd[1]: sshd@4-10.0.21.226:22-139.178.68.195:59336.service: Deactivated successfully.
Dec 16 12:25:13.169085 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:25:13.169860 systemd-logind[1677]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:25:13.170859 systemd-logind[1677]: Removed session 5.
Dec 16 12:25:13.329944 systemd[1]: Started sshd@5-10.0.21.226:22-139.178.68.195:59338.service - OpenSSH per-connection server daemon (139.178.68.195:59338).
Dec 16 12:25:14.155373 sshd[1903]: Accepted publickey for core from 139.178.68.195 port 59338 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:25:14.156790 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:14.161578 systemd-logind[1677]: New session 6 of user core.
Dec 16 12:25:14.172508 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:25:14.477205 sudo[1908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:25:14.477504 sudo[1908]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:25:14.482330 sudo[1908]: pam_unix(sudo:session): session closed for user root
Dec 16 12:25:14.488270 sudo[1907]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:25:14.488850 sudo[1907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:25:14.497188 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:25:14.529000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 12:25:14.530385 augenrules[1930]: No rules
Dec 16 12:25:14.530850 kernel: kauditd_printk_skb: 191 callbacks suppressed
Dec 16 12:25:14.530906 kernel: audit: type=1305 audit(1765887914.529:235): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 12:25:14.533027 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:25:14.533269 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:25:14.529000 audit[1930]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe68537e0 a2=420 a3=0 items=0 ppid=1911 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:14.534183 sudo[1907]: pam_unix(sudo:session): session closed for user root
Dec 16 12:25:14.539677 kernel: audit: type=1300 audit(1765887914.529:235): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe68537e0 a2=420 a3=0 items=0 ppid=1911 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:14.539792 kernel: audit: type=1327 audit(1765887914.529:235): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:25:14.529000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:25:14.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.544012 kernel: audit: type=1130 audit(1765887914.533:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.546901 kernel: audit: type=1131 audit(1765887914.533:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.546980 kernel: audit: type=1106 audit(1765887914.533:238): pid=1907 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.533000 audit[1907]: USER_END pid=1907 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.534000 audit[1907]: CRED_DISP pid=1907 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.552775 kernel: audit: type=1104 audit(1765887914.534:239): pid=1907 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.692021 sshd[1906]: Connection closed by 139.178.68.195 port 59338
Dec 16 12:25:14.692607 sshd-session[1903]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:14.693000 audit[1903]: USER_END pid=1903 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:25:14.696485 systemd[1]: sshd@5-10.0.21.226:22-139.178.68.195:59338.service: Deactivated successfully.
Dec 16 12:25:14.698029 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:25:14.693000 audit[1903]: CRED_DISP pid=1903 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:25:14.701977 kernel: audit: type=1106 audit(1765887914.693:240): pid=1903 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:25:14.702074 kernel: audit: type=1104 audit(1765887914.693:241): pid=1903 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:25:14.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.21.226:22-139.178.68.195:59338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.705265 kernel: audit: type=1131 audit(1765887914.696:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.21.226:22-139.178.68.195:59338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.705519 systemd-logind[1677]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:25:14.706688 systemd-logind[1677]: Removed session 6.
Dec 16 12:25:14.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.21.226:22-139.178.68.195:59352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:14.879039 systemd[1]: Started sshd@6-10.0.21.226:22-139.178.68.195:59352.service - OpenSSH per-connection server daemon (139.178.68.195:59352).
Dec 16 12:25:15.751000 audit[1939]: USER_ACCT pid=1939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:25:15.751821 sshd[1939]: Accepted publickey for core from 139.178.68.195 port 59352 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:25:15.752000 audit[1939]: CRED_ACQ pid=1939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:25:15.752000 audit[1939]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd528e120 a2=3 a3=0 items=0 ppid=1 pid=1939 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:15.752000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:25:15.753029 sshd-session[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:15.757817 systemd-logind[1677]: New session 7 of user core.
Dec 16 12:25:15.764590 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:25:15.767000 audit[1939]: USER_START pid=1939 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:25:15.768000 audit[1942]: CRED_ACQ pid=1942 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:25:16.090000 audit[1943]: USER_ACCT pid=1943 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:16.091000 audit[1943]: CRED_REFR pid=1943 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:16.091275 sudo[1943]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:25:16.091598 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:25:16.093000 audit[1943]: USER_START pid=1943 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:16.440129 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:25:16.469721 (dockerd)[1963]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:25:16.730713 dockerd[1963]: time="2025-12-16T12:25:16.730438360Z" level=info msg="Starting up"
Dec 16 12:25:16.731679 dockerd[1963]: time="2025-12-16T12:25:16.731654160Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:25:16.743071 dockerd[1963]: time="2025-12-16T12:25:16.742204320Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:25:16.788228 dockerd[1963]: time="2025-12-16T12:25:16.788154720Z" level=info msg="Loading containers: start."
Dec 16 12:25:16.798324 kernel: Initializing XFRM netlink socket
Dec 16 12:25:16.846000 audit[2015]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.846000 audit[2015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc3cff6f0 a2=0 a3=0 items=0 ppid=1963 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.846000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 12:25:16.847000 audit[2017]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.847000 audit[2017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffce539b30 a2=0 a3=0 items=0 ppid=1963 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 12:25:16.849000 audit[2019]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.849000 audit[2019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6561460 a2=0 a3=0 items=0 ppid=1963 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 12:25:16.851000 audit[2021]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.851000 audit[2021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1643a60 a2=0 a3=0 items=0 ppid=1963 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 12:25:16.852000 audit[2023]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.852000 audit[2023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd6cc8660 a2=0 a3=0 items=0 ppid=1963 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 12:25:16.854000 audit[2025]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.854000 audit[2025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd3639660 a2=0 a3=0 items=0 ppid=1963 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.854000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:25:16.855000 audit[2027]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.855000 audit[2027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd184d730 a2=0 a3=0 items=0 ppid=1963 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.855000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 12:25:16.857000 audit[2029]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.857000 audit[2029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff70cab50 a2=0 a3=0 items=0 ppid=1963 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 12:25:16.891000 audit[2032]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.891000 audit[2032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffff7418c80 a2=0 a3=0 items=0 ppid=1963 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Dec 16 12:25:16.893000 audit[2034]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.893000 audit[2034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffec531890 a2=0 a3=0 items=0 ppid=1963 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 16 12:25:16.895000 audit[2036]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.895000 audit[2036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff3937510 a2=0 a3=0 items=0 ppid=1963 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.895000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 16 12:25:16.896000 audit[2038]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.896000 audit[2038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd90a96f0 a2=0 a3=0 items=0 ppid=1963 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:25:16.898000 audit[2040]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.898000 audit[2040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc567dc00 a2=0 a3=0 items=0 ppid=1963 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.898000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 16 12:25:16.934000 audit[2070]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.934000 audit[2070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffffb29880 a2=0 a3=0 items=0 ppid=1963 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 12:25:16.936000 audit[2072]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.936000 audit[2072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe9faa5a0 a2=0 a3=0 items=0 ppid=1963 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 12:25:16.938000 audit[2074]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.938000 audit[2074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0899fe0 a2=0 a3=0 items=0 ppid=1963 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 12:25:16.940000 audit[2076]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.940000 audit[2076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff33fff0 a2=0 a3=0 items=0 ppid=1963 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 12:25:16.942000 audit[2078]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.942000 audit[2078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc6644c20 a2=0 a3=0 items=0 ppid=1963 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 12:25:16.944000 audit[2080]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.944000 audit[2080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcd55b220 a2=0 a3=0 items=0 ppid=1963 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.944000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:25:16.946000 audit[2082]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.946000 audit[2082]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc7fa09b0 a2=0 a3=0 items=0 ppid=1963 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 12:25:16.948000 audit[2084]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.948000 audit[2084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff2f605a0 a2=0 a3=0 items=0 ppid=1963 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.948000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 12:25:16.950000 audit[2086]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.950000 audit[2086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff7f9d520 a2=0 a3=0 items=0 ppid=1963 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238
Dec 16 12:25:16.952000 audit[2088]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.952000 audit[2088]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd24d2920 a2=0 a3=0 items=0 ppid=1963 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 16 12:25:16.953000 audit[2090]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.953000 audit[2090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc0f634b0 a2=0 a3=0 items=0 ppid=1963 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.953000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 16 12:25:16.956000 audit[2092]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.956000 audit[2092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe2d3e240 a2=0 a3=0 items=0 ppid=1963 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.956000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:25:16.958000 audit[2094]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.958000 audit[2094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdfa91c10 a2=0 a3=0 items=0 ppid=1963 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 16 12:25:16.965000 audit[2099]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.965000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd6376900 a2=0 a3=0 items=0 ppid=1963 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 16 12:25:16.967000 audit[2101]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.967000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffdbe1f00 a2=0 a3=0 items=0 ppid=1963 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 16 12:25:16.968000 audit[2103]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.968000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd5d3a7c0 a2=0 a3=0 items=0 ppid=1963 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 16 12:25:16.970000 audit[2105]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.970000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffeb23b30 a2=0 a3=0 items=0 ppid=1963 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 16 12:25:16.972000 audit[2107]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.972000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd4d8efc0 a2=0 a3=0 items=0 ppid=1963 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.972000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 16 12:25:16.974000 audit[2109]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:25:16.974000 audit[2109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd3fe1d40 a2=0 a3=0 items=0 ppid=1963 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.974000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 16 12:25:16.995000 audit[2114]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.995000 audit[2114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd1b681f0 a2=0 a3=0 items=0 ppid=1963 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Dec 16 12:25:16.997000 audit[2116]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:16.997000 audit[2116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc5806fb0 a2=0 a3=0 items=0 ppid=1963 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:16.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Dec 16 12:25:17.004000 audit[2124]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:17.004000 audit[2124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff1bc6e70 a2=0 a3=0 items=0 ppid=1963 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:17.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054
Dec 16 12:25:17.014000 audit[2130]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:17.014000 audit[2130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcb6b5470 a2=0 a3=0 items=0 ppid=1963 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:25:17.014000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50
Dec 16 12:25:17.016000 audit[2132]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:25:17.016000 audit[2132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffe9b025c0 a2=0 a3=0 items=0 ppid=1963 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.016000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:25:17.018000 audit[2134]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:17.018000 audit[2134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffed8c9340 a2=0 a3=0 items=0 ppid=1963 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.018000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:25:17.020000 audit[2136]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:17.020000 audit[2136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd1d5bc70 a2=0 a3=0 items=0 ppid=1963 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:25:17.022000 audit[2138]: NETFILTER_CFG table=filter:41 family=2 entries=1 
op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:17.022000 audit[2138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc80d9640 a2=0 a3=0 items=0 ppid=1963 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.022000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:25:17.023188 systemd-networkd[1602]: docker0: Link UP Dec 16 12:25:17.028901 dockerd[1963]: time="2025-12-16T12:25:17.028859200Z" level=info msg="Loading containers: done." Dec 16 12:25:17.040899 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck599284411-merged.mount: Deactivated successfully. Dec 16 12:25:17.053756 dockerd[1963]: time="2025-12-16T12:25:17.053712280Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:25:17.053925 dockerd[1963]: time="2025-12-16T12:25:17.053801840Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:25:17.053995 dockerd[1963]: time="2025-12-16T12:25:17.053976040Z" level=info msg="Initializing buildkit" Dec 16 12:25:17.079564 dockerd[1963]: time="2025-12-16T12:25:17.079509400Z" level=info msg="Completed buildkit initialization" Dec 16 12:25:17.086267 dockerd[1963]: time="2025-12-16T12:25:17.086203320Z" level=info msg="Daemon has completed initialization" Dec 16 12:25:17.086428 dockerd[1963]: time="2025-12-16T12:25:17.086287840Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:25:17.086574 systemd[1]: Started docker.service - Docker 
Application Container Engine. Dec 16 12:25:17.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:17.919617 containerd[1706]: time="2025-12-16T12:25:17.919574880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 12:25:18.560061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380399003.mount: Deactivated successfully. Dec 16 12:25:19.035721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:25:19.037170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:25:19.209815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:25:19.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:19.225694 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:25:19.267525 kubelet[2244]: E1216 12:25:19.267467 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:25:19.270212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:25:19.270365 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 12:25:19.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:25:19.270756 systemd[1]: kubelet.service: Consumed 151ms CPU time, 107.2M memory peak.
Dec 16 12:25:19.284322 containerd[1706]: time="2025-12-16T12:25:19.284111400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:19.287265 containerd[1706]: time="2025-12-16T12:25:19.286940600Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22975171"
Dec 16 12:25:19.288215 containerd[1706]: time="2025-12-16T12:25:19.288162880Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:19.291479 containerd[1706]: time="2025-12-16T12:25:19.291427840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:19.293365 containerd[1706]: time="2025-12-16T12:25:19.293313520Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.3736964s"
Dec 16 12:25:19.293627 containerd[1706]: time="2025-12-16T12:25:19.293465040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Dec 16 12:25:19.294285 containerd[1706]: time="2025-12-16T12:25:19.294261240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 16 12:25:19.699827 chronyd[1659]: Selected source PHC0
Dec 16 12:25:20.251213 containerd[1706]: time="2025-12-16T12:25:20.251162040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:20.253424 containerd[1706]: time="2025-12-16T12:25:20.253386044Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=0"
Dec 16 12:25:20.255043 containerd[1706]: time="2025-12-16T12:25:20.254976252Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:20.258480 containerd[1706]: time="2025-12-16T12:25:20.258411990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:20.259704 containerd[1706]: time="2025-12-16T12:25:20.259673341Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 965.258097ms"
Dec 16 12:25:20.259704 containerd[1706]: time="2025-12-16T12:25:20.259708831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Dec 16 12:25:20.260235 containerd[1706]: time="2025-12-16T12:25:20.260201229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 16 12:25:21.195616 containerd[1706]: time="2025-12-16T12:25:21.195552164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:21.196562 containerd[1706]: time="2025-12-16T12:25:21.196498003Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0"
Dec 16 12:25:21.197771 containerd[1706]: time="2025-12-16T12:25:21.197732184Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:21.201548 containerd[1706]: time="2025-12-16T12:25:21.201506963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:21.202831 containerd[1706]: time="2025-12-16T12:25:21.202786557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 942.478511ms"
Dec 16 12:25:21.202831 containerd[1706]: time="2025-12-16T12:25:21.202824301Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Dec 16 12:25:21.203358 containerd[1706]: time="2025-12-16T12:25:21.203239565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 16 12:25:22.178613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050611966.mount: Deactivated successfully.
Dec 16 12:25:22.358578 containerd[1706]: time="2025-12-16T12:25:22.358032594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:22.359222 containerd[1706]: time="2025-12-16T12:25:22.359121347Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0"
Dec 16 12:25:22.360038 containerd[1706]: time="2025-12-16T12:25:22.359992914Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:22.362543 containerd[1706]: time="2025-12-16T12:25:22.362502182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:22.363384 containerd[1706]: time="2025-12-16T12:25:22.363320404Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.160052412s"
Dec 16 12:25:22.363384 containerd[1706]: time="2025-12-16T12:25:22.363377518Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Dec 16 12:25:22.364072 containerd[1706]: time="2025-12-16T12:25:22.364043510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 16 12:25:23.061395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284535071.mount: Deactivated successfully.
Dec 16 12:25:23.621796 containerd[1706]: time="2025-12-16T12:25:23.621728547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:23.622765 containerd[1706]: time="2025-12-16T12:25:23.622484384Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=0"
Dec 16 12:25:23.623791 containerd[1706]: time="2025-12-16T12:25:23.623759901Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:23.626516 containerd[1706]: time="2025-12-16T12:25:23.626484573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:23.627495 containerd[1706]: time="2025-12-16T12:25:23.627457410Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.263377337s"
Dec 16 12:25:23.627495 containerd[1706]: time="2025-12-16T12:25:23.627499210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Dec 16 12:25:23.628409 containerd[1706]: time="2025-12-16T12:25:23.628374008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 16 12:25:24.182244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401035406.mount: Deactivated successfully.
Dec 16 12:25:24.190339 containerd[1706]: time="2025-12-16T12:25:24.190258932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:24.191303 containerd[1706]: time="2025-12-16T12:25:24.191229369Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0"
Dec 16 12:25:24.192490 containerd[1706]: time="2025-12-16T12:25:24.192445446Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:24.195462 containerd[1706]: time="2025-12-16T12:25:24.195407517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:24.195783 containerd[1706]: time="2025-12-16T12:25:24.195742677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 567.336509ms"
Dec 16 12:25:24.195783 containerd[1706]: time="2025-12-16T12:25:24.195776916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Dec 16 12:25:24.196253 containerd[1706]: time="2025-12-16T12:25:24.196173435Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 16 12:25:24.754979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328966736.mount: Deactivated successfully.
Dec 16 12:25:26.449714 containerd[1706]: time="2025-12-16T12:25:26.449652925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:26.451383 containerd[1706]: time="2025-12-16T12:25:26.451338771Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=85821047"
Dec 16 12:25:26.452939 containerd[1706]: time="2025-12-16T12:25:26.452860056Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:26.456396 containerd[1706]: time="2025-12-16T12:25:26.456365467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:25:26.457529 containerd[1706]: time="2025-12-16T12:25:26.457487711Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.261084916s"
Dec 16 12:25:26.457529 containerd[1706]: time="2025-12-16T12:25:26.457521111Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Dec 16 12:25:29.483019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 12:25:29.484471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:25:29.674996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:25:29.678526 kernel: kauditd_printk_skb: 134 callbacks suppressed
Dec 16 12:25:29.678637 kernel: audit: type=1130 audit(1765887929.674:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:29.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:29.690933 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:25:29.729106 kubelet[2411]: E1216 12:25:29.729049 2411 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:25:29.731500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:25:29.731750 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:25:29.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:25:29.732199 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.1M memory peak.
Dec 16 12:25:29.736333 kernel: audit: type=1131 audit(1765887929.731:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:25:32.579649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:25:32.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:32.580142 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.1M memory peak.
Dec 16 12:25:32.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:32.582268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:25:32.584866 kernel: audit: type=1130 audit(1765887932.579:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:32.584944 kernel: audit: type=1131 audit(1765887932.579:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:32.608488 systemd[1]: Reload requested from client PID 2426 ('systemctl') (unit session-7.scope)...
Dec 16 12:25:32.608504 systemd[1]: Reloading...
Dec 16 12:25:32.681324 zram_generator::config[2472]: No configuration found.
Dec 16 12:25:32.870619 systemd[1]: Reloading finished in 261 ms.
Dec 16 12:25:32.895000 audit: BPF prog-id=63 op=LOAD
Dec 16 12:25:32.895000 audit: BPF prog-id=60 op=UNLOAD
Dec 16 12:25:32.897809 kernel: audit: type=1334 audit(1765887932.895:299): prog-id=63 op=LOAD
Dec 16 12:25:32.897860 kernel: audit: type=1334 audit(1765887932.895:300): prog-id=60 op=UNLOAD
Dec 16 12:25:32.897881 kernel: audit: type=1334 audit(1765887932.895:301): prog-id=64 op=LOAD
Dec 16 12:25:32.895000 audit: BPF prog-id=64 op=LOAD
Dec 16 12:25:32.895000 audit: BPF prog-id=65 op=LOAD
Dec 16 12:25:32.895000 audit: BPF prog-id=61 op=UNLOAD
Dec 16 12:25:32.895000 audit: BPF prog-id=62 op=UNLOAD
Dec 16 12:25:32.898000 audit: BPF prog-id=66 op=LOAD
Dec 16 12:25:32.898000 audit: BPF prog-id=47 op=UNLOAD
Dec 16 12:25:32.898000 audit: BPF prog-id=67 op=LOAD
Dec 16 12:25:32.898000 audit: BPF prog-id=68 op=LOAD
Dec 16 12:25:32.900370 kernel: audit: type=1334 audit(1765887932.895:302): prog-id=65 op=LOAD
Dec 16 12:25:32.900408 kernel: audit: type=1334 audit(1765887932.895:303): prog-id=61 op=UNLOAD
Dec 16 12:25:32.900426 kernel: audit: type=1334 audit(1765887932.895:304): prog-id=62 op=UNLOAD
Dec 16 12:25:32.898000 audit: BPF prog-id=48 op=UNLOAD
Dec 16 12:25:32.898000 audit: BPF prog-id=49 op=UNLOAD
Dec 16 12:25:32.899000 audit: BPF prog-id=69 op=LOAD
Dec 16 12:25:32.899000 audit: BPF prog-id=59 op=UNLOAD
Dec 16 12:25:32.901000 audit: BPF prog-id=70 op=LOAD
Dec 16 12:25:32.901000 audit: BPF prog-id=43 op=UNLOAD
Dec 16 12:25:32.901000 audit: BPF prog-id=71 op=LOAD
Dec 16 12:25:32.901000 audit: BPF prog-id=72 op=LOAD
Dec 16 12:25:32.901000 audit: BPF prog-id=44 op=UNLOAD
Dec 16 12:25:32.901000 audit: BPF prog-id=45 op=UNLOAD
Dec 16 12:25:32.902000 audit: BPF prog-id=73 op=LOAD
Dec 16 12:25:32.913000 audit: BPF prog-id=46 op=UNLOAD
Dec 16 12:25:32.913000 audit: BPF prog-id=74 op=LOAD
Dec 16 12:25:32.913000 audit: BPF prog-id=75 op=LOAD
Dec 16 12:25:32.913000 audit: BPF prog-id=50 op=UNLOAD
Dec 16 12:25:32.913000 audit: BPF prog-id=51 op=UNLOAD
Dec 16 12:25:32.914000 audit: BPF prog-id=76 op=LOAD
Dec 16 12:25:32.914000 audit: BPF prog-id=58 op=UNLOAD
Dec 16 12:25:32.915000 audit: BPF prog-id=77 op=LOAD
Dec 16 12:25:32.915000 audit: BPF prog-id=52 op=UNLOAD
Dec 16 12:25:32.915000 audit: BPF prog-id=78 op=LOAD
Dec 16 12:25:32.915000 audit: BPF prog-id=79 op=LOAD
Dec 16 12:25:32.915000 audit: BPF prog-id=53 op=UNLOAD
Dec 16 12:25:32.915000 audit: BPF prog-id=54 op=UNLOAD
Dec 16 12:25:32.916000 audit: BPF prog-id=80 op=LOAD
Dec 16 12:25:32.916000 audit: BPF prog-id=55 op=UNLOAD
Dec 16 12:25:32.916000 audit: BPF prog-id=81 op=LOAD
Dec 16 12:25:32.916000 audit: BPF prog-id=82 op=LOAD
Dec 16 12:25:32.916000 audit: BPF prog-id=56 op=UNLOAD
Dec 16 12:25:32.916000 audit: BPF prog-id=57 op=UNLOAD
Dec 16 12:25:32.936022 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:25:32.936124 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:25:32.936446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:25:32.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:25:32.936504 systemd[1]: kubelet.service: Consumed 94ms CPU time, 95.1M memory peak.
Dec 16 12:25:32.938180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:25:33.054777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:25:33.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:25:33.067620 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:25:33.103968 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:25:33.103968 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:25:33.104287 kubelet[2520]: I1216 12:25:33.104031 2520 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:25:34.055324 kubelet[2520]: I1216 12:25:34.054498 2520 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 12:25:34.055324 kubelet[2520]: I1216 12:25:34.054529 2520 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:25:34.055324 kubelet[2520]: I1216 12:25:34.054556 2520 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 12:25:34.055324 kubelet[2520]: I1216 12:25:34.054562 2520 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:25:34.055324 kubelet[2520]: I1216 12:25:34.054806 2520 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:25:34.064584 kubelet[2520]: E1216 12:25:34.064520 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.21.226:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.21.226:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:25:34.065355 kubelet[2520]: I1216 12:25:34.065330 2520 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:25:34.070315 kubelet[2520]: I1216 12:25:34.068660 2520 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:25:34.071430 kubelet[2520]: I1216 12:25:34.071408 2520 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 12:25:34.071755 kubelet[2520]: I1216 12:25:34.071726 2520 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:25:34.071995 kubelet[2520]: I1216 12:25:34.071819 2520 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-7-179ea8c226","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:25:34.072133 kubelet[2520]: I1216 12:25:34.072119 2520 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:25:34.072189 kubelet[2520]: I1216 12:25:34.072182 2520 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 12:25:34.072376 kubelet[2520]: I1216 12:25:34.072359 2520 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 12:25:34.076151 kubelet[2520]: I1216 12:25:34.076126 2520 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:25:34.078676 kubelet[2520]: I1216 12:25:34.078651 2520 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 12:25:34.078778 kubelet[2520]: I1216 12:25:34.078767 2520 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:25:34.079449 kubelet[2520]: E1216 12:25:34.079404 2520 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.21.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-7-179ea8c226&limit=500&resourceVersion=0\": dial tcp 10.0.21.226:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:25:34.080329 kubelet[2520]: I1216 12:25:34.080250 2520 kubelet.go:387] "Adding apiserver pod source"
Dec 16 12:25:34.080329 kubelet[2520]: I1216 12:25:34.080276 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:25:34.080858 kubelet[2520]: E1216 12:25:34.080828 2520 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.21.226:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.21.226:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:25:34.083319 kubelet[2520]: I1216 12:25:34.083238 2520 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Dec 16 12:25:34.084002 kubelet[2520]: I1216 12:25:34.083963 2520 kubelet.go:940] "Not
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:25:34.084002 kubelet[2520]: I1216 12:25:34.084005 2520 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:25:34.084063 kubelet[2520]: W1216 12:25:34.084046 2520 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:25:34.086817 kubelet[2520]: I1216 12:25:34.086793 2520 server.go:1262] "Started kubelet" Dec 16 12:25:34.087537 kubelet[2520]: I1216 12:25:34.087370 2520 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:25:34.087537 kubelet[2520]: I1216 12:25:34.087451 2520 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:25:34.087809 kubelet[2520]: I1216 12:25:34.087787 2520 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:25:34.089458 kubelet[2520]: I1216 12:25:34.089430 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:25:34.094531 kubelet[2520]: I1216 12:25:34.094493 2520 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:25:34.095568 kubelet[2520]: I1216 12:25:34.095536 2520 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:25:34.095000 audit[2537]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:34.095000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffde7a5460 a2=0 a3=0 items=0 ppid=2520 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:25:34.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:25:34.096847 kubelet[2520]: I1216 12:25:34.096806 2520 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:25:34.096000 audit[2538]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:34.096000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee2d2e80 a2=0 a3=0 items=0 ppid=2520 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:25:34.099221 kubelet[2520]: E1216 12:25:34.099180 2520 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515-1-0-7-179ea8c226\" not found" Dec 16 12:25:34.100262 kubelet[2520]: E1216 12:25:34.097956 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.21.226:6443/api/v1/namespaces/default/events\": dial tcp 10.0.21.226:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-7-179ea8c226.1881b1b85063aa34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-7-179ea8c226,UID:ci-4515-1-0-7-179ea8c226,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-7-179ea8c226,},FirstTimestamp:2025-12-16 12:25:34.08675282 +0000 UTC m=+1.016134596,LastTimestamp:2025-12-16 12:25:34.08675282 +0000 UTC 
m=+1.016134596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-7-179ea8c226,}" Dec 16 12:25:34.100408 kubelet[2520]: E1216 12:25:34.100326 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.21.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-7-179ea8c226?timeout=10s\": dial tcp 10.0.21.226:6443: connect: connection refused" interval="200ms" Dec 16 12:25:34.100730 kubelet[2520]: I1216 12:25:34.100701 2520 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:25:34.100835 kubelet[2520]: I1216 12:25:34.100816 2520 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:25:34.101373 kubelet[2520]: I1216 12:25:34.101355 2520 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:25:34.101915 kubelet[2520]: I1216 12:25:34.101460 2520 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:25:34.102092 kubelet[2520]: I1216 12:25:34.102067 2520 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:25:34.102403 kubelet[2520]: I1216 12:25:34.102337 2520 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:25:34.102699 kubelet[2520]: E1216 12:25:34.102632 2520 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.21.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.21.226:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:25:34.102000 audit[2540]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 
16 12:25:34.102000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc2c44490 a2=0 a3=0 items=0 ppid=2520 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:25:34.104431 kubelet[2520]: E1216 12:25:34.104401 2520 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:25:34.104000 audit[2542]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:34.104000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffca537770 a2=0 a3=0 items=0 ppid=2520 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:25:34.113000 audit[2546]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:34.113000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff506bd20 a2=0 a3=0 items=0 ppid=2520 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.113000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 12:25:34.114158 kubelet[2520]: I1216 12:25:34.114099 2520 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:25:34.114000 audit[2547]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:34.114000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff8b598b0 a2=0 a3=0 items=0 ppid=2520 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.114000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:25:34.115428 kubelet[2520]: I1216 12:25:34.115397 2520 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:25:34.115428 kubelet[2520]: I1216 12:25:34.115423 2520 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:25:34.115489 kubelet[2520]: I1216 12:25:34.115457 2520 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:25:34.115527 kubelet[2520]: E1216 12:25:34.115498 2520 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:25:34.115000 audit[2548]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:34.115000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1e5f310 a2=0 a3=0 items=0 ppid=2520 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:25:34.116000 audit[2549]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:34.116000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffc3fdc0 a2=0 a3=0 items=0 ppid=2520 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.116000 audit[2550]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:34.116000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff41c9f40 a2=0 a3=0 items=0 ppid=2520 pid=2550 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:25:34.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:25:34.117770 kubelet[2520]: E1216 12:25:34.117727 2520 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.21.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.21.226:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:25:34.117000 audit[2551]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:34.117000 audit[2551]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc8068330 a2=0 a3=0 items=0 ppid=2520 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.117000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:25:34.117000 audit[2552]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:34.117000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff650cdd0 a2=0 a3=0 items=0 ppid=2520 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:25:34.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:25:34.119086 kubelet[2520]: I1216 12:25:34.119070 2520 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:25:34.119179 kubelet[2520]: I1216 12:25:34.119169 2520 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:25:34.118000 audit[2556]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:34.118000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc466bba0 a2=0 a3=0 items=0 ppid=2520 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:34.118000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:25:34.119414 kubelet[2520]: I1216 12:25:34.119262 2520 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:25:34.122065 kubelet[2520]: I1216 12:25:34.122043 2520 policy_none.go:49] "None policy: Start" Dec 16 12:25:34.122157 kubelet[2520]: I1216 12:25:34.122148 2520 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:25:34.122215 kubelet[2520]: I1216 12:25:34.122197 2520 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:25:34.124345 kubelet[2520]: I1216 12:25:34.123753 2520 policy_none.go:47] "Start" Dec 16 12:25:34.128141 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:25:34.141655 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
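The audit `PROCTITLE` records interleaved above carry the audited process's command line as a hex string with NUL-separated argv elements. A small sketch to decode one (the sample value is copied verbatim from a `PROCTITLE` record in this log; the helper name is ours):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_str)
    # argv elements are separated by NUL bytes; join with spaces for display
    return " ".join(part.decode("ascii", errors="replace")
                    for part in raw.split(b"\x00") if part)

# PROCTITLE value taken from one of the audit records in this log
sample = ("69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D"
          "43414E415259002D74006D616E676C65")
print(decode_proctitle(sample))
# iptables -w 5 -N KUBE-KUBELET-CANARY -t mangle
```

This is how the `iptables`/`ip6tables` chain-creation commands (KUBE-KUBELET-CANARY, KUBE-FIREWALL, KUBE-IPTABLES-HINT) can be read back out of the audit stream.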
Dec 16 12:25:34.144690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:25:34.164565 kubelet[2520]: E1216 12:25:34.164520 2520 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:25:34.164814 kubelet[2520]: I1216 12:25:34.164790 2520 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:25:34.164955 kubelet[2520]: I1216 12:25:34.164813 2520 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:25:34.165544 kubelet[2520]: I1216 12:25:34.165379 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:25:34.166755 kubelet[2520]: E1216 12:25:34.166720 2520 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:25:34.166841 kubelet[2520]: E1216 12:25:34.166778 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515-1-0-7-179ea8c226\" not found" Dec 16 12:25:34.227095 systemd[1]: Created slice kubepods-burstable-pod3c8c5f44ae6e325b3b3b9c5f45f13a0d.slice - libcontainer container kubepods-burstable-pod3c8c5f44ae6e325b3b3b9c5f45f13a0d.slice. Dec 16 12:25:34.247553 kubelet[2520]: E1216 12:25:34.247502 2520 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.250140 systemd[1]: Created slice kubepods-burstable-podc600c3a7d9ba9ec4436dc55d2d5f3a86.slice - libcontainer container kubepods-burstable-podc600c3a7d9ba9ec4436dc55d2d5f3a86.slice. 
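The `kubepods-burstable-pod<uid>.slice` units created above follow the kubelet systemd cgroup driver's naming scheme: the cgroup path components (`kubepods`, QoS class, `pod<uid>`) are chained with dashes into a `.slice` unit name. A sketch of that convention — the helper is ours, not kubelet code, and the dash-to-underscore escaping of the UID is our understanding of the driver's behavior (invisible here because these static-pod UIDs contain no dashes):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Illustrative: systemd slice unit name for /kubepods/<qos>/pod<uid>."""
    # Dashes act as hierarchy separators in slice names, so dashes
    # inside the pod UID are escaped to underscores by the driver.
    escaped_uid = pod_uid.replace("-", "_")
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

# Reproduces one of the slice names from the systemd records above
print(pod_slice_name("burstable", "3c8c5f44ae6e325b3b3b9c5f45f13a0d"))
# kubepods-burstable-pod3c8c5f44ae6e325b3b3b9c5f45f13a0d.slice
```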
Dec 16 12:25:34.262904 kubelet[2520]: E1216 12:25:34.262864 2520 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.265367 systemd[1]: Created slice kubepods-burstable-pod57ec2e6a16ed56b25e8732f92da7f348.slice - libcontainer container kubepods-burstable-pod57ec2e6a16ed56b25e8732f92da7f348.slice. Dec 16 12:25:34.267272 kubelet[2520]: I1216 12:25:34.267248 2520 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.267479 kubelet[2520]: E1216 12:25:34.267313 2520 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.268116 kubelet[2520]: E1216 12:25:34.268086 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.21.226:6443/api/v1/nodes\": dial tcp 10.0.21.226:6443: connect: connection refused" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.301961 kubelet[2520]: E1216 12:25:34.301878 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.21.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-7-179ea8c226?timeout=10s\": dial tcp 10.0.21.226:6443: connect: connection refused" interval="400ms" Dec 16 12:25:34.303923 kubelet[2520]: I1216 12:25:34.303889 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.303998 kubelet[2520]: I1216 12:25:34.303931 2520 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.303998 kubelet[2520]: I1216 12:25:34.303951 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57ec2e6a16ed56b25e8732f92da7f348-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-7-179ea8c226\" (UID: \"57ec2e6a16ed56b25e8732f92da7f348\") " pod="kube-system/kube-scheduler-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.303998 kubelet[2520]: I1216 12:25:34.303968 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c8c5f44ae6e325b3b3b9c5f45f13a0d-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" (UID: \"3c8c5f44ae6e325b3b3b9c5f45f13a0d\") " pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.303998 kubelet[2520]: I1216 12:25:34.303982 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c8c5f44ae6e325b3b3b9c5f45f13a0d-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" (UID: \"3c8c5f44ae6e325b3b3b9c5f45f13a0d\") " pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.303998 kubelet[2520]: I1216 12:25:34.303996 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c8c5f44ae6e325b3b3b9c5f45f13a0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" (UID: \"3c8c5f44ae6e325b3b3b9c5f45f13a0d\") " 
pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.304109 kubelet[2520]: I1216 12:25:34.304009 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.304109 kubelet[2520]: I1216 12:25:34.304023 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.304109 kubelet[2520]: I1216 12:25:34.304037 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.470382 kubelet[2520]: I1216 12:25:34.470186 2520 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.470746 kubelet[2520]: E1216 12:25:34.470712 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.21.226:6443/api/v1/nodes\": dial tcp 10.0.21.226:6443: connect: connection refused" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.552748 containerd[1706]: time="2025-12-16T12:25:34.552694802Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-7-179ea8c226,Uid:3c8c5f44ae6e325b3b3b9c5f45f13a0d,Namespace:kube-system,Attempt:0,}" Dec 16 12:25:34.567088 containerd[1706]: time="2025-12-16T12:25:34.567039888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-7-179ea8c226,Uid:c600c3a7d9ba9ec4436dc55d2d5f3a86,Namespace:kube-system,Attempt:0,}" Dec 16 12:25:34.570496 containerd[1706]: time="2025-12-16T12:25:34.570414619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-7-179ea8c226,Uid:57ec2e6a16ed56b25e8732f92da7f348,Namespace:kube-system,Attempt:0,}" Dec 16 12:25:34.703886 kubelet[2520]: E1216 12:25:34.703825 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.21.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-7-179ea8c226?timeout=10s\": dial tcp 10.0.21.226:6443: connect: connection refused" interval="800ms" Dec 16 12:25:34.872471 kubelet[2520]: I1216 12:25:34.872429 2520 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.872899 kubelet[2520]: E1216 12:25:34.872838 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.21.226:6443/api/v1/nodes\": dial tcp 10.0.21.226:6443: connect: connection refused" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:34.967011 kubelet[2520]: E1216 12:25:34.966897 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.21.226:6443/api/v1/namespaces/default/events\": dial tcp 10.0.21.226:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-7-179ea8c226.1881b1b85063aa34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-7-179ea8c226,UID:ci-4515-1-0-7-179ea8c226,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-7-179ea8c226,},FirstTimestamp:2025-12-16 12:25:34.08675282 +0000 UTC m=+1.016134596,LastTimestamp:2025-12-16 12:25:34.08675282 +0000 UTC m=+1.016134596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-7-179ea8c226,}" Dec 16 12:25:35.053610 kubelet[2520]: E1216 12:25:35.053562 2520 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.21.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.21.226:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:25:35.139054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4047860914.mount: Deactivated successfully. Dec 16 12:25:35.153742 containerd[1706]: time="2025-12-16T12:25:35.153692259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:25:35.155400 containerd[1706]: time="2025-12-16T12:25:35.155348624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:25:35.159533 containerd[1706]: time="2025-12-16T12:25:35.159496677Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:25:35.162432 containerd[1706]: time="2025-12-16T12:25:35.162367647Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:25:35.163909 containerd[1706]: 
time="2025-12-16T12:25:35.163885051Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:25:35.165185 containerd[1706]: time="2025-12-16T12:25:35.165116935Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:25:35.166358 containerd[1706]: time="2025-12-16T12:25:35.166311659Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:25:35.168159 containerd[1706]: time="2025-12-16T12:25:35.168131385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:25:35.168814 containerd[1706]: time="2025-12-16T12:25:35.168790107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 611.046929ms" Dec 16 12:25:35.171382 containerd[1706]: time="2025-12-16T12:25:35.171351236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 601.671059ms" Dec 16 12:25:35.171943 containerd[1706]: time="2025-12-16T12:25:35.171888877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 598.92413ms" Dec 16 12:25:35.209046 containerd[1706]: time="2025-12-16T12:25:35.208974757Z" level=info msg="connecting to shim ea30b3df5d386def62836e0dcf5e4b28db34f56df90cd815a6dd87f754f4fae1" address="unix:///run/containerd/s/b43e5924b8ef4128a454ae2913c45275e7694e92b0ddf1df7a50c63412eac285" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:35.221573 containerd[1706]: time="2025-12-16T12:25:35.221529597Z" level=info msg="connecting to shim 409e2e7052fed2ca7a2dd5ac6a218b07aecebc97ad8f60b6a562bda69c05095a" address="unix:///run/containerd/s/d99be28a8ab9ae79e2fca8638d1f001846516b3af76926f48ca62b4f71a583fb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:35.224419 containerd[1706]: time="2025-12-16T12:25:35.224354886Z" level=info msg="connecting to shim 8e718408ecbab92160c0ea662d6e04d286bf859bda6fcda3e81c81a3a34c99df" address="unix:///run/containerd/s/e4e7fd41c1512e0d5ed56545a15a766381c48cf2dbc057c1469d2eb03d2051de" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:35.234164 kubelet[2520]: E1216 12:25:35.234125 2520 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.21.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.21.226:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:25:35.237510 systemd[1]: Started cri-containerd-ea30b3df5d386def62836e0dcf5e4b28db34f56df90cd815a6dd87f754f4fae1.scope - libcontainer container ea30b3df5d386def62836e0dcf5e4b28db34f56df90cd815a6dd87f754f4fae1. Dec 16 12:25:35.244640 systemd[1]: Started cri-containerd-409e2e7052fed2ca7a2dd5ac6a218b07aecebc97ad8f60b6a562bda69c05095a.scope - libcontainer container 409e2e7052fed2ca7a2dd5ac6a218b07aecebc97ad8f60b6a562bda69c05095a. 
Dec 16 12:25:35.249000 audit: BPF prog-id=83 op=LOAD Dec 16 12:25:35.250432 kernel: kauditd_printk_skb: 72 callbacks suppressed Dec 16 12:25:35.250493 kernel: audit: type=1334 audit(1765887935.249:353): prog-id=83 op=LOAD Dec 16 12:25:35.251000 audit: BPF prog-id=84 op=LOAD Dec 16 12:25:35.251000 audit[2592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.255688 kernel: audit: type=1334 audit(1765887935.251:354): prog-id=84 op=LOAD Dec 16 12:25:35.255795 kernel: audit: type=1300 audit(1765887935.251:354): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.259586 kernel: audit: type=1327 audit(1765887935.251:354): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.259828 kernel: audit: type=1334 audit(1765887935.251:355): prog-id=84 op=UNLOAD Dec 16 12:25:35.251000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:25:35.251000 audit[2592]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.263901 kernel: audit: type=1300 audit(1765887935.251:355): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.267348 kernel: audit: type=1327 audit(1765887935.251:355): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.268137 kernel: audit: type=1334 audit(1765887935.255:356): prog-id=85 op=LOAD Dec 16 12:25:35.268165 kernel: audit: type=1300 audit(1765887935.255:356): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.255000 audit: BPF prog-id=85 op=LOAD Dec 16 12:25:35.255000 audit[2592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.272103 
kernel: audit: type=1327 audit(1765887935.255:356): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.255000 audit: BPF prog-id=86 op=LOAD Dec 16 12:25:35.255000 audit[2592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.255000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:25:35.255000 audit[2592]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 
12:25:35.255000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:25:35.255000 audit[2592]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.255000 audit: BPF prog-id=87 op=LOAD Dec 16 12:25:35.255000 audit[2592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2568 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561333062336466356433383664656636323833366530646366356534 Dec 16 12:25:35.266000 audit: BPF prog-id=88 op=LOAD Dec 16 12:25:35.276694 systemd[1]: Started cri-containerd-8e718408ecbab92160c0ea662d6e04d286bf859bda6fcda3e81c81a3a34c99df.scope - libcontainer container 8e718408ecbab92160c0ea662d6e04d286bf859bda6fcda3e81c81a3a34c99df. 
Dec 16 12:25:35.277000 audit: BPF prog-id=89 op=LOAD Dec 16 12:25:35.277000 audit[2626]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2596 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396532653730353266656432636137613264643561633661323138 Dec 16 12:25:35.278000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:25:35.278000 audit[2626]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396532653730353266656432636137613264643561633661323138 Dec 16 12:25:35.278000 audit: BPF prog-id=90 op=LOAD Dec 16 12:25:35.278000 audit[2626]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2596 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.278000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396532653730353266656432636137613264643561633661323138 Dec 16 12:25:35.278000 audit: BPF prog-id=91 op=LOAD Dec 16 12:25:35.278000 audit[2626]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2596 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396532653730353266656432636137613264643561633661323138 Dec 16 12:25:35.278000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:25:35.278000 audit[2626]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396532653730353266656432636137613264643561633661323138 Dec 16 12:25:35.278000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:25:35.278000 audit[2626]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:25:35.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396532653730353266656432636137613264643561633661323138 Dec 16 12:25:35.278000 audit: BPF prog-id=92 op=LOAD Dec 16 12:25:35.278000 audit[2626]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2596 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396532653730353266656432636137613264643561633661323138 Dec 16 12:25:35.290696 containerd[1706]: time="2025-12-16T12:25:35.290653700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-7-179ea8c226,Uid:3c8c5f44ae6e325b3b3b9c5f45f13a0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea30b3df5d386def62836e0dcf5e4b28db34f56df90cd815a6dd87f754f4fae1\"" Dec 16 12:25:35.292000 audit: BPF prog-id=93 op=LOAD Dec 16 12:25:35.294000 audit: BPF prog-id=94 op=LOAD Dec 16 12:25:35.294000 audit[2643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400018c180 a2=98 a3=0 items=0 ppid=2609 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.294000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373138343038656362616239323136306330656136363264366530 Dec 16 12:25:35.294000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:25:35.294000 audit[2643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373138343038656362616239323136306330656136363264366530 Dec 16 12:25:35.294000 audit: BPF prog-id=95 op=LOAD Dec 16 12:25:35.294000 audit[2643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=2609 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373138343038656362616239323136306330656136363264366530 Dec 16 12:25:35.294000 audit: BPF prog-id=96 op=LOAD Dec 16 12:25:35.294000 audit[2643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=2609 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:25:35.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373138343038656362616239323136306330656136363264366530 Dec 16 12:25:35.295000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:25:35.295000 audit[2643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373138343038656362616239323136306330656136363264366530 Dec 16 12:25:35.295000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:25:35.295000 audit[2643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373138343038656362616239323136306330656136363264366530 Dec 16 12:25:35.295000 audit: BPF prog-id=97 op=LOAD Dec 16 12:25:35.295000 audit[2643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400018c648 a2=98 a3=0 items=0 ppid=2609 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373138343038656362616239323136306330656136363264366530 Dec 16 12:25:35.303942 containerd[1706]: time="2025-12-16T12:25:35.303903103Z" level=info msg="CreateContainer within sandbox \"ea30b3df5d386def62836e0dcf5e4b28db34f56df90cd815a6dd87f754f4fae1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:25:35.317594 containerd[1706]: time="2025-12-16T12:25:35.317533827Z" level=info msg="Container a6131a585762411ae9bcc143c83bb198f7f351d08015838499990c3e4f050778: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:35.326282 containerd[1706]: time="2025-12-16T12:25:35.325913494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-7-179ea8c226,Uid:57ec2e6a16ed56b25e8732f92da7f348,Namespace:kube-system,Attempt:0,} returns sandbox id \"409e2e7052fed2ca7a2dd5ac6a218b07aecebc97ad8f60b6a562bda69c05095a\"" Dec 16 12:25:35.327254 containerd[1706]: time="2025-12-16T12:25:35.327221018Z" level=info msg="CreateContainer within sandbox \"ea30b3df5d386def62836e0dcf5e4b28db34f56df90cd815a6dd87f754f4fae1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6131a585762411ae9bcc143c83bb198f7f351d08015838499990c3e4f050778\"" Dec 16 12:25:35.328034 containerd[1706]: time="2025-12-16T12:25:35.327951340Z" level=info msg="StartContainer for \"a6131a585762411ae9bcc143c83bb198f7f351d08015838499990c3e4f050778\"" Dec 16 12:25:35.329303 containerd[1706]: time="2025-12-16T12:25:35.329182904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-7-179ea8c226,Uid:c600c3a7d9ba9ec4436dc55d2d5f3a86,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"8e718408ecbab92160c0ea662d6e04d286bf859bda6fcda3e81c81a3a34c99df\"" Dec 16 12:25:35.333309 containerd[1706]: time="2025-12-16T12:25:35.333259157Z" level=info msg="connecting to shim a6131a585762411ae9bcc143c83bb198f7f351d08015838499990c3e4f050778" address="unix:///run/containerd/s/b43e5924b8ef4128a454ae2913c45275e7694e92b0ddf1df7a50c63412eac285" protocol=ttrpc version=3 Dec 16 12:25:35.334180 containerd[1706]: time="2025-12-16T12:25:35.334133560Z" level=info msg="CreateContainer within sandbox \"409e2e7052fed2ca7a2dd5ac6a218b07aecebc97ad8f60b6a562bda69c05095a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:25:35.335743 containerd[1706]: time="2025-12-16T12:25:35.335715565Z" level=info msg="CreateContainer within sandbox \"8e718408ecbab92160c0ea662d6e04d286bf859bda6fcda3e81c81a3a34c99df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:25:35.346930 containerd[1706]: time="2025-12-16T12:25:35.346887761Z" level=info msg="Container 10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:35.351538 systemd[1]: Started cri-containerd-a6131a585762411ae9bcc143c83bb198f7f351d08015838499990c3e4f050778.scope - libcontainer container a6131a585762411ae9bcc143c83bb198f7f351d08015838499990c3e4f050778. 
Dec 16 12:25:35.360269 containerd[1706]: time="2025-12-16T12:25:35.360139364Z" level=info msg="CreateContainer within sandbox \"409e2e7052fed2ca7a2dd5ac6a218b07aecebc97ad8f60b6a562bda69c05095a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178\"" Dec 16 12:25:35.361504 containerd[1706]: time="2025-12-16T12:25:35.361469168Z" level=info msg="StartContainer for \"10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178\"" Dec 16 12:25:35.362548 containerd[1706]: time="2025-12-16T12:25:35.362520732Z" level=info msg="Container c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:35.363194 containerd[1706]: time="2025-12-16T12:25:35.363168694Z" level=info msg="connecting to shim 10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178" address="unix:///run/containerd/s/d99be28a8ab9ae79e2fca8638d1f001846516b3af76926f48ca62b4f71a583fb" protocol=ttrpc version=3 Dec 16 12:25:35.363000 audit: BPF prog-id=98 op=LOAD Dec 16 12:25:35.364000 audit: BPF prog-id=99 op=LOAD Dec 16 12:25:35.364000 audit[2699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2568 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313331613538353736323431316165396263633134336338336262 Dec 16 12:25:35.365000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:25:35.365000 audit[2699]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2568 pid=2699 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313331613538353736323431316165396263633134336338336262 Dec 16 12:25:35.365000 audit: BPF prog-id=100 op=LOAD Dec 16 12:25:35.365000 audit[2699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2568 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313331613538353736323431316165396263633134336338336262 Dec 16 12:25:35.365000 audit: BPF prog-id=101 op=LOAD Dec 16 12:25:35.365000 audit[2699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2568 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313331613538353736323431316165396263633134336338336262 Dec 16 12:25:35.365000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:25:35.365000 audit[2699]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=2568 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313331613538353736323431316165396263633134336338336262 Dec 16 12:25:35.365000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:25:35.365000 audit[2699]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2568 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313331613538353736323431316165396263633134336338336262 Dec 16 12:25:35.365000 audit: BPF prog-id=102 op=LOAD Dec 16 12:25:35.365000 audit[2699]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2568 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136313331613538353736323431316165396263633134336338336262 Dec 16 12:25:35.375606 containerd[1706]: time="2025-12-16T12:25:35.375386493Z" level=info msg="CreateContainer within sandbox 
\"8e718408ecbab92160c0ea662d6e04d286bf859bda6fcda3e81c81a3a34c99df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10\"" Dec 16 12:25:35.376037 containerd[1706]: time="2025-12-16T12:25:35.375969535Z" level=info msg="StartContainer for \"c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10\"" Dec 16 12:25:35.378242 containerd[1706]: time="2025-12-16T12:25:35.378181662Z" level=info msg="connecting to shim c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10" address="unix:///run/containerd/s/e4e7fd41c1512e0d5ed56545a15a766381c48cf2dbc057c1469d2eb03d2051de" protocol=ttrpc version=3 Dec 16 12:25:35.387503 systemd[1]: Started cri-containerd-10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178.scope - libcontainer container 10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178. Dec 16 12:25:35.400728 containerd[1706]: time="2025-12-16T12:25:35.400616334Z" level=info msg="StartContainer for \"a6131a585762411ae9bcc143c83bb198f7f351d08015838499990c3e4f050778\" returns successfully" Dec 16 12:25:35.407534 systemd[1]: Started cri-containerd-c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10.scope - libcontainer container c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10. 
Dec 16 12:25:35.411000 audit: BPF prog-id=103 op=LOAD Dec 16 12:25:35.412000 audit: BPF prog-id=104 op=LOAD Dec 16 12:25:35.412000 audit[2722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2596 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130626632363765643162616266393839333934353635643563616238 Dec 16 12:25:35.412000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:25:35.412000 audit[2722]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130626632363765643162616266393839333934353635643563616238 Dec 16 12:25:35.412000 audit: BPF prog-id=105 op=LOAD Dec 16 12:25:35.412000 audit[2722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2596 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.412000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130626632363765643162616266393839333934353635643563616238 Dec 16 12:25:35.413000 audit: BPF prog-id=106 op=LOAD Dec 16 12:25:35.413000 audit[2722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2596 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130626632363765643162616266393839333934353635643563616238 Dec 16 12:25:35.413000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:25:35.413000 audit[2722]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130626632363765643162616266393839333934353635643563616238 Dec 16 12:25:35.413000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:25:35.413000 audit[2722]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:35.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130626632363765643162616266393839333934353635643563616238 Dec 16 12:25:35.413000 audit: BPF prog-id=107 op=LOAD Dec 16 12:25:35.413000 audit[2722]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2596 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130626632363765643162616266393839333934353635643563616238 Dec 16 12:25:35.421062 kubelet[2520]: E1216 12:25:35.421025 2520 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.21.226:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.21.226:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:25:35.422000 audit: BPF prog-id=108 op=LOAD Dec 16 12:25:35.422000 audit: BPF prog-id=109 op=LOAD Dec 16 12:25:35.422000 audit[2733]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2609 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.422000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633262623332653961396563636532333862636566383532303063 Dec 16 12:25:35.422000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:25:35.422000 audit[2733]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633262623332653961396563636532333862636566383532303063 Dec 16 12:25:35.422000 audit: BPF prog-id=110 op=LOAD Dec 16 12:25:35.422000 audit[2733]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2609 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633262623332653961396563636532333862636566383532303063 Dec 16 12:25:35.423000 audit: BPF prog-id=111 op=LOAD Dec 16 12:25:35.423000 audit[2733]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2609 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:25:35.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633262623332653961396563636532333862636566383532303063 Dec 16 12:25:35.423000 audit: BPF prog-id=111 op=UNLOAD Dec 16 12:25:35.423000 audit[2733]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633262623332653961396563636532333862636566383532303063 Dec 16 12:25:35.423000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:25:35.423000 audit[2733]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633262623332653961396563636532333862636566383532303063 Dec 16 12:25:35.423000 audit: BPF prog-id=112 op=LOAD Dec 16 12:25:35.423000 audit[2733]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2609 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:35.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336633262623332653961396563636532333862636566383532303063 Dec 16 12:25:35.455871 containerd[1706]: time="2025-12-16T12:25:35.455812552Z" level=info msg="StartContainer for \"10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178\" returns successfully" Dec 16 12:25:35.464176 containerd[1706]: time="2025-12-16T12:25:35.464123579Z" level=info msg="StartContainer for \"c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10\" returns successfully" Dec 16 12:25:35.676373 kubelet[2520]: I1216 12:25:35.674320 2520 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:36.126077 kubelet[2520]: E1216 12:25:36.126046 2520 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:36.129278 kubelet[2520]: E1216 12:25:36.129217 2520 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:36.131551 kubelet[2520]: E1216 12:25:36.131515 2520 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.136712 kubelet[2520]: E1216 12:25:37.136440 2520 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.136712 kubelet[2520]: E1216 12:25:37.136573 2520 kubelet.go:3215] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.402107 kubelet[2520]: E1216 12:25:37.401992 2520 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515-1-0-7-179ea8c226\" not found" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.586261 kubelet[2520]: I1216 12:25:37.585904 2520 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.586261 kubelet[2520]: E1216 12:25:37.585945 2520 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-7-179ea8c226\": node \"ci-4515-1-0-7-179ea8c226\" not found" Dec 16 12:25:37.600130 kubelet[2520]: I1216 12:25:37.600083 2520 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.606380 kubelet[2520]: E1216 12:25:37.606338 2520 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.606380 kubelet[2520]: I1216 12:25:37.606378 2520 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.609041 kubelet[2520]: E1216 12:25:37.609009 2520 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.609041 kubelet[2520]: I1216 12:25:37.609042 2520 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:37.610691 kubelet[2520]: E1216 12:25:37.610665 2520 
kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-7-179ea8c226\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:38.082540 kubelet[2520]: I1216 12:25:38.082439 2520 apiserver.go:52] "Watching apiserver" Dec 16 12:25:38.102389 kubelet[2520]: I1216 12:25:38.102283 2520 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:25:38.137272 kubelet[2520]: I1216 12:25:38.137229 2520 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:38.139525 kubelet[2520]: E1216 12:25:38.139480 2520 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-7-179ea8c226\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:38.459834 kubelet[2520]: I1216 12:25:38.459513 2520 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:38.644850 kubelet[2520]: I1216 12:25:38.644814 2520 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:39.510978 systemd[1]: Reload requested from client PID 2810 ('systemctl') (unit session-7.scope)... Dec 16 12:25:39.510995 systemd[1]: Reloading... Dec 16 12:25:39.587323 zram_generator::config[2859]: No configuration found. Dec 16 12:25:39.767643 systemd[1]: Reloading finished in 256 ms. Dec 16 12:25:39.793879 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:25:39.810639 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:25:39.811064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
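The long `audit: PROCTITLE proctitle=72756E63…` records above are not noise: audit encodes the full argv of the audited process as hex, with NUL bytes separating arguments. A minimal decoder (the helper name is mine, not from the log) recovers the runc command line:

```python
def decode_proctitle(hex_blob: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    return [arg.decode() for arg in bytes.fromhex(hex_blob).split(b"\x00")]

# Prefix of the runc invocations recorded in the audit entries above:
args = decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
)
print(args)  # ['runc', '--root', '/run/containerd/runc/k8s.io']
```

Decoding the full recorded value shows runc being driven by containerd's v2 task service for the k8s.io namespace; audit truncates the final container-ID argument at its record-size limit, so the trailing ID in the log is incomplete.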
Dec 16 12:25:39.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:39.811352 systemd[1]: kubelet.service: Consumed 1.401s CPU time, 124.3M memory peak. Dec 16 12:25:39.813400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:25:39.813000 audit: BPF prog-id=113 op=LOAD Dec 16 12:25:39.813000 audit: BPF prog-id=114 op=LOAD Dec 16 12:25:39.813000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:25:39.813000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:25:39.814000 audit: BPF prog-id=115 op=LOAD Dec 16 12:25:39.814000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:25:39.816000 audit: BPF prog-id=116 op=LOAD Dec 16 12:25:39.816000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:25:39.816000 audit: BPF prog-id=117 op=LOAD Dec 16 12:25:39.816000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:25:39.816000 audit: BPF prog-id=118 op=LOAD Dec 16 12:25:39.816000 audit: BPF prog-id=119 op=LOAD Dec 16 12:25:39.816000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:25:39.816000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:25:39.817000 audit: BPF prog-id=120 op=LOAD Dec 16 12:25:39.828000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:25:39.829000 audit: BPF prog-id=121 op=LOAD Dec 16 12:25:39.829000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:25:39.829000 audit: BPF prog-id=122 op=LOAD Dec 16 12:25:39.829000 audit: BPF prog-id=123 op=LOAD Dec 16 12:25:39.829000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:25:39.829000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:25:39.830000 audit: BPF prog-id=124 op=LOAD Dec 16 12:25:39.830000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:25:39.830000 audit: BPF prog-id=125 op=LOAD Dec 16 12:25:39.830000 audit: BPF prog-id=126 op=LOAD Dec 16 12:25:39.830000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:25:39.830000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:25:39.831000 audit: BPF prog-id=127 
op=LOAD Dec 16 12:25:39.831000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:25:39.831000 audit: BPF prog-id=128 op=LOAD Dec 16 12:25:39.831000 audit: BPF prog-id=129 op=LOAD Dec 16 12:25:39.831000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:25:39.831000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:25:39.833000 audit: BPF prog-id=130 op=LOAD Dec 16 12:25:39.833000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:25:39.833000 audit: BPF prog-id=131 op=LOAD Dec 16 12:25:39.833000 audit: BPF prog-id=132 op=LOAD Dec 16 12:25:39.833000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:25:39.833000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:25:39.958267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:25:39.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:39.962197 (kubelet)[2901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:25:40.000675 kubelet[2901]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:25:40.000675 kubelet[2901]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
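The burst of `audit: BPF prog-id=… op=LOAD` / `op=UNLOAD` pairs around the systemd reload is systemd replacing the BPF programs attached to each re-generated unit: a new program is loaded, then the old prog-id is unloaded. A quick way to check that loads and unloads balance (sample lines and the helper name are mine):

```python
import re

def live_bpf_progs(lines):
    """Return the set of BPF prog-ids still loaded after replaying audit lines."""
    live = set()
    for line in lines:
        m = re.search(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", line)
        if m:
            prog, op = int(m.group(1)), m.group(2)
            live.add(prog) if op == "LOAD" else live.discard(prog)
    return live

sample = [
    "audit: BPF prog-id=113 op=LOAD",
    "audit: BPF prog-id=74 op=UNLOAD",   # old program from before the reload
    "audit: BPF prog-id=113 op=UNLOAD",
]
print(live_bpf_progs(sample))  # set()
```

Replaying the real entries from a full journal this way makes leaked (never-unloaded) program IDs stand out immediately.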
Dec 16 12:25:40.001016 kubelet[2901]: I1216 12:25:40.000860 2901 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:25:40.008218 kubelet[2901]: I1216 12:25:40.008179 2901 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:25:40.008218 kubelet[2901]: I1216 12:25:40.008212 2901 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:25:40.008414 kubelet[2901]: I1216 12:25:40.008241 2901 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:25:40.008414 kubelet[2901]: I1216 12:25:40.008247 2901 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:25:40.008497 kubelet[2901]: I1216 12:25:40.008479 2901 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:25:40.009944 kubelet[2901]: I1216 12:25:40.009906 2901 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:25:40.013585 kubelet[2901]: I1216 12:25:40.013553 2901 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:25:40.018559 kubelet[2901]: I1216 12:25:40.018457 2901 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:25:40.021386 kubelet[2901]: I1216 12:25:40.021351 2901 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 12:25:40.021582 kubelet[2901]: I1216 12:25:40.021549 2901 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:25:40.021735 kubelet[2901]: I1216 12:25:40.021580 2901 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-7-179ea8c226","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:25:40.021735 kubelet[2901]: I1216 12:25:40.021735 2901 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:25:40.021883 kubelet[2901]: I1216 12:25:40.021744 2901 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:25:40.021883 kubelet[2901]: I1216 12:25:40.021770 2901 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:25:40.022729 kubelet[2901]: I1216 12:25:40.022710 2901 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:25:40.022916 kubelet[2901]: I1216 12:25:40.022898 2901 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:25:40.023005 kubelet[2901]: I1216 12:25:40.022923 2901 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:25:40.023005 kubelet[2901]: I1216 12:25:40.022960 2901 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:25:40.023005 kubelet[2901]: I1216 12:25:40.022974 2901 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:25:40.026509 kubelet[2901]: I1216 12:25:40.026482 2901 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:25:40.028305 kubelet[2901]: I1216 12:25:40.028204 2901 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:25:40.028305 kubelet[2901]: I1216 12:25:40.028253 2901 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:25:40.032580 kubelet[2901]: I1216 12:25:40.032557 2901 server.go:1262] "Started kubelet" Dec 16 12:25:40.032956 kubelet[2901]: I1216 12:25:40.032914 2901 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:25:40.033176 kubelet[2901]: I1216 12:25:40.033119 2901 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:25:40.033269 kubelet[2901]: I1216 12:25:40.033256 2901 
server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:25:40.033570 kubelet[2901]: I1216 12:25:40.033550 2901 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:25:40.034705 kubelet[2901]: I1216 12:25:40.033857 2901 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:25:40.035664 kubelet[2901]: I1216 12:25:40.035635 2901 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:25:40.046506 kubelet[2901]: I1216 12:25:40.045921 2901 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:25:40.048803 kubelet[2901]: I1216 12:25:40.048770 2901 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:25:40.049459 kubelet[2901]: I1216 12:25:40.049426 2901 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:25:40.049702 kubelet[2901]: I1216 12:25:40.049590 2901 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:25:40.050336 kubelet[2901]: E1216 12:25:40.050193 2901 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:25:40.050924 kubelet[2901]: I1216 12:25:40.050809 2901 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:25:40.051299 kubelet[2901]: I1216 12:25:40.050927 2901 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:25:40.053156 kubelet[2901]: I1216 12:25:40.053005 2901 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 12:25:40.053872 kubelet[2901]: I1216 12:25:40.053816 2901 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:25:40.055139 kubelet[2901]: I1216 12:25:40.055106 2901 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 12:25:40.055139 kubelet[2901]: I1216 12:25:40.055136 2901 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:25:40.055240 kubelet[2901]: I1216 12:25:40.055160 2901 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:25:40.055240 kubelet[2901]: E1216 12:25:40.055212 2901 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:25:40.087832 kubelet[2901]: I1216 12:25:40.087578 2901 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:25:40.087832 kubelet[2901]: I1216 12:25:40.087599 2901 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:25:40.087832 kubelet[2901]: I1216 12:25:40.087621 2901 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:25:40.087832 kubelet[2901]: I1216 12:25:40.087756 2901 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:25:40.087832 kubelet[2901]: I1216 12:25:40.087766 2901 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:25:40.087832 kubelet[2901]: I1216 12:25:40.087781 2901 policy_none.go:49] "None policy: Start" Dec 16 12:25:40.087832 kubelet[2901]: I1216 12:25:40.087789 2901 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:25:40.088156 kubelet[2901]: I1216 12:25:40.088131 2901 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:25:40.088354 kubelet[2901]: I1216 12:25:40.088340 2901 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 12:25:40.088428 kubelet[2901]: I1216 12:25:40.088419 
2901 policy_none.go:47] "Start" Dec 16 12:25:40.093984 kubelet[2901]: E1216 12:25:40.093955 2901 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:25:40.094145 kubelet[2901]: I1216 12:25:40.094128 2901 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:25:40.094177 kubelet[2901]: I1216 12:25:40.094146 2901 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:25:40.094710 kubelet[2901]: I1216 12:25:40.094558 2901 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:25:40.096337 kubelet[2901]: E1216 12:25:40.095687 2901 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:25:40.156775 kubelet[2901]: I1216 12:25:40.156736 2901 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.156886 kubelet[2901]: I1216 12:25:40.156793 2901 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.157066 kubelet[2901]: I1216 12:25:40.157031 2901 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.164490 kubelet[2901]: E1216 12:25:40.164172 2901 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.165080 kubelet[2901]: E1216 12:25:40.165045 2901 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" already exists" pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.203868 kubelet[2901]: I1216 12:25:40.203845 2901 
kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.214636 kubelet[2901]: I1216 12:25:40.214576 2901 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.214802 kubelet[2901]: I1216 12:25:40.214723 2901 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.350762 kubelet[2901]: I1216 12:25:40.350581 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.350762 kubelet[2901]: I1216 12:25:40.350631 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.350762 kubelet[2901]: I1216 12:25:40.350654 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.350762 kubelet[2901]: I1216 12:25:40.350680 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.351133 kubelet[2901]: I1216 12:25:40.350797 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c8c5f44ae6e325b3b3b9c5f45f13a0d-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" (UID: \"3c8c5f44ae6e325b3b3b9c5f45f13a0d\") " pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.351133 kubelet[2901]: I1216 12:25:40.350871 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c8c5f44ae6e325b3b3b9c5f45f13a0d-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" (UID: \"3c8c5f44ae6e325b3b3b9c5f45f13a0d\") " pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.351133 kubelet[2901]: I1216 12:25:40.350917 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c8c5f44ae6e325b3b3b9c5f45f13a0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" (UID: \"3c8c5f44ae6e325b3b3b9c5f45f13a0d\") " pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.351133 kubelet[2901]: I1216 12:25:40.350989 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c600c3a7d9ba9ec4436dc55d2d5f3a86-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-7-179ea8c226\" (UID: \"c600c3a7d9ba9ec4436dc55d2d5f3a86\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:40.351133 kubelet[2901]: I1216 12:25:40.351035 2901 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57ec2e6a16ed56b25e8732f92da7f348-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-7-179ea8c226\" (UID: \"57ec2e6a16ed56b25e8732f92da7f348\") " pod="kube-system/kube-scheduler-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:41.024423 kubelet[2901]: I1216 12:25:41.024372 2901 apiserver.go:52] "Watching apiserver" Dec 16 12:25:41.049941 kubelet[2901]: I1216 12:25:41.049852 2901 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:25:41.074340 kubelet[2901]: I1216 12:25:41.074277 2901 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:41.081099 kubelet[2901]: E1216 12:25:41.080380 2901 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-7-179ea8c226\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" Dec 16 12:25:41.093920 kubelet[2901]: I1216 12:25:41.093499 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515-1-0-7-179ea8c226" podStartSLOduration=3.0934812799999998 podStartE2EDuration="3.09348128s" podCreationTimestamp="2025-12-16 12:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:25:41.093285959 +0000 UTC m=+1.126481112" watchObservedRunningTime="2025-12-16 12:25:41.09348128 +0000 UTC m=+1.126676473" Dec 16 12:25:41.116120 kubelet[2901]: I1216 12:25:41.115764 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515-1-0-7-179ea8c226" podStartSLOduration=3.115743751 podStartE2EDuration="3.115743751s" podCreationTimestamp="2025-12-16 12:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-16 12:25:41.11518075 +0000 UTC m=+1.148375903" watchObservedRunningTime="2025-12-16 12:25:41.115743751 +0000 UTC m=+1.148938944" Dec 16 12:25:41.117102 kubelet[2901]: I1216 12:25:41.116942 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515-1-0-7-179ea8c226" podStartSLOduration=1.116495314 podStartE2EDuration="1.116495314s" podCreationTimestamp="2025-12-16 12:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:25:41.104111194 +0000 UTC m=+1.137306347" watchObservedRunningTime="2025-12-16 12:25:41.116495314 +0000 UTC m=+1.149690467" Dec 16 12:25:41.640419 update_engine[1679]: I20251216 12:25:41.639905 1679 update_attempter.cc:509] Updating boot flags... Dec 16 12:25:45.992552 kubelet[2901]: I1216 12:25:45.992516 2901 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:25:45.993285 containerd[1706]: time="2025-12-16T12:25:45.993189073Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:25:45.993543 kubelet[2901]: I1216 12:25:45.993373 2901 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:25:46.680987 systemd[1]: Created slice kubepods-besteffort-pod2316e4ca_48ba_401e_a71d_5c1a11b6b8c1.slice - libcontainer container kubepods-besteffort-pod2316e4ca_48ba_401e_a71d_5c1a11b6b8c1.slice. 
Dec 16 12:25:46.692189 kubelet[2901]: I1216 12:25:46.691961 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2316e4ca-48ba-401e-a71d-5c1a11b6b8c1-kube-proxy\") pod \"kube-proxy-6p7kq\" (UID: \"2316e4ca-48ba-401e-a71d-5c1a11b6b8c1\") " pod="kube-system/kube-proxy-6p7kq" Dec 16 12:25:46.692189 kubelet[2901]: I1216 12:25:46.692173 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2316e4ca-48ba-401e-a71d-5c1a11b6b8c1-lib-modules\") pod \"kube-proxy-6p7kq\" (UID: \"2316e4ca-48ba-401e-a71d-5c1a11b6b8c1\") " pod="kube-system/kube-proxy-6p7kq" Dec 16 12:25:46.692189 kubelet[2901]: I1216 12:25:46.692194 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj2b4\" (UniqueName: \"kubernetes.io/projected/2316e4ca-48ba-401e-a71d-5c1a11b6b8c1-kube-api-access-jj2b4\") pod \"kube-proxy-6p7kq\" (UID: \"2316e4ca-48ba-401e-a71d-5c1a11b6b8c1\") " pod="kube-system/kube-proxy-6p7kq" Dec 16 12:25:46.692404 kubelet[2901]: I1216 12:25:46.692213 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2316e4ca-48ba-401e-a71d-5c1a11b6b8c1-xtables-lock\") pod \"kube-proxy-6p7kq\" (UID: \"2316e4ca-48ba-401e-a71d-5c1a11b6b8c1\") " pod="kube-system/kube-proxy-6p7kq" Dec 16 12:25:46.997083 containerd[1706]: time="2025-12-16T12:25:46.997043269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6p7kq,Uid:2316e4ca-48ba-401e-a71d-5c1a11b6b8c1,Namespace:kube-system,Attempt:0,}" Dec 16 12:25:47.019788 containerd[1706]: time="2025-12-16T12:25:47.019741422Z" level=info msg="connecting to shim 47709093ec086ec5f0fa97e8f27e55d9d23b43f82e8cad81de725e63d5abf055" 
address="unix:///run/containerd/s/5485ae63194c5339cb10ce9a461d8a1926335015ff863b5c38ab1bf5f31b0ed8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:47.044539 systemd[1]: Started cri-containerd-47709093ec086ec5f0fa97e8f27e55d9d23b43f82e8cad81de725e63d5abf055.scope - libcontainer container 47709093ec086ec5f0fa97e8f27e55d9d23b43f82e8cad81de725e63d5abf055. Dec 16 12:25:47.053000 audit: BPF prog-id=133 op=LOAD Dec 16 12:25:47.055331 kernel: kauditd_printk_skb: 164 callbacks suppressed Dec 16 12:25:47.055404 kernel: audit: type=1334 audit(1765887947.053:443): prog-id=133 op=LOAD Dec 16 12:25:47.055421 kernel: audit: type=1334 audit(1765887947.054:444): prog-id=134 op=LOAD Dec 16 12:25:47.054000 audit: BPF prog-id=134 op=LOAD Dec 16 12:25:47.054000 audit[2990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.059436 kernel: audit: type=1300 audit(1765887947.054:444): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.059587 kernel: audit: type=1327 audit(1765887947.054:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.054000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.055000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:25:47.063389 kernel: audit: type=1334 audit(1765887947.055:445): prog-id=134 op=UNLOAD Dec 16 12:25:47.055000 audit[2990]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.066922 kernel: audit: type=1300 audit(1765887947.055:445): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.070339 kernel: audit: type=1327 audit(1765887947.055:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.055000 audit: BPF prog-id=135 op=LOAD Dec 16 12:25:47.071626 kernel: audit: type=1334 audit(1765887947.055:446): prog-id=135 op=LOAD Dec 16 12:25:47.055000 audit[2990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 
items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.075322 kernel: audit: type=1300 audit(1765887947.055:446): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.075445 kernel: audit: type=1327 audit(1765887947.055:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.055000 audit: BPF prog-id=136 op=LOAD Dec 16 12:25:47.055000 audit[2990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.059000 audit: BPF prog-id=136 op=UNLOAD Dec 16 12:25:47.059000 audit[2990]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.059000 audit: BPF prog-id=135 op=UNLOAD Dec 16 12:25:47.059000 audit[2990]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.059000 audit: BPF prog-id=137 op=LOAD Dec 16 12:25:47.059000 audit[2990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2980 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373039303933656330383665633566306661393765386632376535 Dec 16 12:25:47.094930 containerd[1706]: 
time="2025-12-16T12:25:47.094887984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6p7kq,Uid:2316e4ca-48ba-401e-a71d-5c1a11b6b8c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"47709093ec086ec5f0fa97e8f27e55d9d23b43f82e8cad81de725e63d5abf055\"" Dec 16 12:25:47.102460 containerd[1706]: time="2025-12-16T12:25:47.102420409Z" level=info msg="CreateContainer within sandbox \"47709093ec086ec5f0fa97e8f27e55d9d23b43f82e8cad81de725e63d5abf055\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:25:47.119225 containerd[1706]: time="2025-12-16T12:25:47.116573254Z" level=info msg="Container e08e6fd9a78b9149cb459f8d989b9c0ef9d23391acbe098bc31fddff5ac019c5: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:47.130405 containerd[1706]: time="2025-12-16T12:25:47.130351019Z" level=info msg="CreateContainer within sandbox \"47709093ec086ec5f0fa97e8f27e55d9d23b43f82e8cad81de725e63d5abf055\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e08e6fd9a78b9149cb459f8d989b9c0ef9d23391acbe098bc31fddff5ac019c5\"" Dec 16 12:25:47.131544 containerd[1706]: time="2025-12-16T12:25:47.131512622Z" level=info msg="StartContainer for \"e08e6fd9a78b9149cb459f8d989b9c0ef9d23391acbe098bc31fddff5ac019c5\"" Dec 16 12:25:47.133108 containerd[1706]: time="2025-12-16T12:25:47.133073627Z" level=info msg="connecting to shim e08e6fd9a78b9149cb459f8d989b9c0ef9d23391acbe098bc31fddff5ac019c5" address="unix:///run/containerd/s/5485ae63194c5339cb10ce9a461d8a1926335015ff863b5c38ab1bf5f31b0ed8" protocol=ttrpc version=3 Dec 16 12:25:47.162804 systemd[1]: Started cri-containerd-e08e6fd9a78b9149cb459f8d989b9c0ef9d23391acbe098bc31fddff5ac019c5.scope - libcontainer container e08e6fd9a78b9149cb459f8d989b9c0ef9d23391acbe098bc31fddff5ac019c5. 
Dec 16 12:25:47.167496 systemd[1]: Created slice kubepods-besteffort-pod42d78fd9_5855_41b6_acb7_40210908f8e0.slice - libcontainer container kubepods-besteffort-pod42d78fd9_5855_41b6_acb7_40210908f8e0.slice. Dec 16 12:25:47.196009 kubelet[2901]: I1216 12:25:47.195880 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w9bv\" (UniqueName: \"kubernetes.io/projected/42d78fd9-5855-41b6-acb7-40210908f8e0-kube-api-access-4w9bv\") pod \"tigera-operator-65cdcdfd6d-crhxs\" (UID: \"42d78fd9-5855-41b6-acb7-40210908f8e0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-crhxs" Dec 16 12:25:47.196009 kubelet[2901]: I1216 12:25:47.195999 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42d78fd9-5855-41b6-acb7-40210908f8e0-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-crhxs\" (UID: \"42d78fd9-5855-41b6-acb7-40210908f8e0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-crhxs" Dec 16 12:25:47.217000 audit: BPF prog-id=138 op=LOAD Dec 16 12:25:47.217000 audit[3016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2980 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386536666439613738623931343963623435396638643938396239 Dec 16 12:25:47.217000 audit: BPF prog-id=139 op=LOAD Dec 16 12:25:47.217000 audit[3016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2980 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386536666439613738623931343963623435396638643938396239 Dec 16 12:25:47.217000 audit: BPF prog-id=139 op=UNLOAD Dec 16 12:25:47.217000 audit[3016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386536666439613738623931343963623435396638643938396239 Dec 16 12:25:47.217000 audit: BPF prog-id=138 op=UNLOAD Dec 16 12:25:47.217000 audit[3016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386536666439613738623931343963623435396638643938396239 Dec 16 12:25:47.217000 audit: BPF prog-id=140 op=LOAD Dec 16 12:25:47.217000 audit[3016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2980 pid=3016 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530386536666439613738623931343963623435396638643938396239 Dec 16 12:25:47.238320 containerd[1706]: time="2025-12-16T12:25:47.238199086Z" level=info msg="StartContainer for \"e08e6fd9a78b9149cb459f8d989b9c0ef9d23391acbe098bc31fddff5ac019c5\" returns successfully" Dec 16 12:25:47.464000 audit[3087]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.464000 audit[3087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd104e390 a2=0 a3=1 items=0 ppid=3030 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:25:47.464000 audit[3086]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.464000 audit[3086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe38c9e20 a2=0 a3=1 items=0 ppid=3030 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:25:47.467000 
audit[3091]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.467000 audit[3091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3a1d7c0 a2=0 a3=1 items=0 ppid=3030 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:25:47.467000 audit[3090]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.467000 audit[3090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc184abd0 a2=0 a3=1 items=0 ppid=3030 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:25:47.470000 audit[3093]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.470000 audit[3093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc573920 a2=0 a3=1 items=0 ppid=3030 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:25:47.470000 audit[3094]: NETFILTER_CFG table=filter:59 family=10 entries=1 
op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.470000 audit[3094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe2d193e0 a2=0 a3=1 items=0 ppid=3030 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:25:47.474106 containerd[1706]: time="2025-12-16T12:25:47.474067407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-crhxs,Uid:42d78fd9-5855-41b6-acb7-40210908f8e0,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:25:47.492022 containerd[1706]: time="2025-12-16T12:25:47.491976944Z" level=info msg="connecting to shim d7350c866bfa7e896b607f7679907710e90492fadcb80635fe8c0a1f3b34bedf" address="unix:///run/containerd/s/05820b25d913420fa3fe8602da2646c990a4ba386391b331ccb72282ffd154d2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:47.514510 systemd[1]: Started cri-containerd-d7350c866bfa7e896b607f7679907710e90492fadcb80635fe8c0a1f3b34bedf.scope - libcontainer container d7350c866bfa7e896b607f7679907710e90492fadcb80635fe8c0a1f3b34bedf. 
Dec 16 12:25:47.523000 audit: BPF prog-id=141 op=LOAD Dec 16 12:25:47.524000 audit: BPF prog-id=142 op=LOAD Dec 16 12:25:47.524000 audit[3115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3103 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333530633836366266613765383936623630376637363739393037 Dec 16 12:25:47.524000 audit: BPF prog-id=142 op=UNLOAD Dec 16 12:25:47.524000 audit[3115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333530633836366266613765383936623630376637363739393037 Dec 16 12:25:47.524000 audit: BPF prog-id=143 op=LOAD Dec 16 12:25:47.524000 audit[3115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3103 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.524000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333530633836366266613765383936623630376637363739393037 Dec 16 12:25:47.524000 audit: BPF prog-id=144 op=LOAD Dec 16 12:25:47.524000 audit[3115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3103 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333530633836366266613765383936623630376637363739393037 Dec 16 12:25:47.524000 audit: BPF prog-id=144 op=UNLOAD Dec 16 12:25:47.524000 audit[3115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333530633836366266613765383936623630376637363739393037 Dec 16 12:25:47.524000 audit: BPF prog-id=143 op=UNLOAD Dec 16 12:25:47.524000 audit[3115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:47.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333530633836366266613765383936623630376637363739393037 Dec 16 12:25:47.524000 audit: BPF prog-id=145 op=LOAD Dec 16 12:25:47.524000 audit[3115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3103 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437333530633836366266613765383936623630376637363739393037 Dec 16 12:25:47.549012 containerd[1706]: time="2025-12-16T12:25:47.548956568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-crhxs,Uid:42d78fd9-5855-41b6-acb7-40210908f8e0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d7350c866bfa7e896b607f7679907710e90492fadcb80635fe8c0a1f3b34bedf\"" Dec 16 12:25:47.551459 containerd[1706]: time="2025-12-16T12:25:47.551430816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:25:47.568000 audit[3140]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.568000 audit[3140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc2fa97a0 a2=0 a3=1 items=0 ppid=3030 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:25:47.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:25:47.570000 audit[3142]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.570000 audit[3142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd29879b0 a2=0 a3=1 items=0 ppid=3030 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 12:25:47.574000 audit[3145]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.574000 audit[3145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdd2b8f90 a2=0 a3=1 items=0 ppid=3030 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 12:25:47.575000 audit[3146]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.575000 audit[3146]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffc9a215f0 a2=0 a3=1 items=0 ppid=3030 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:25:47.578000 audit[3148]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.578000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff9d9f000 a2=0 a3=1 items=0 ppid=3030 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:25:47.579000 audit[3149]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.579000 audit[3149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa072cf0 a2=0 a3=1 items=0 ppid=3030 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:25:47.582000 audit[3151]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 12:25:47.582000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffefc40fd0 a2=0 a3=1 items=0 ppid=3030 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:25:47.586000 audit[3154]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.586000 audit[3154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd20a6300 a2=0 a3=1 items=0 ppid=3030 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:25:47.587000 audit[3155]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.587000 audit[3155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2fb23a0 a2=0 a3=1 items=0 ppid=3030 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:25:47.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:25:47.589000 audit[3157]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.589000 audit[3157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffacb0760 a2=0 a3=1 items=0 ppid=3030 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:25:47.591000 audit[3158]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.591000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1480cc0 a2=0 a3=1 items=0 ppid=3030 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:25:47.594000 audit[3160]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.594000 audit[3160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7137180 a2=0 a3=1 items=0 ppid=3030 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 12:25:47.598000 audit[3163]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.598000 audit[3163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf92fad0 a2=0 a3=1 items=0 ppid=3030 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 12:25:47.602000 audit[3166]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.602000 audit[3166]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf3e3910 a2=0 a3=1 items=0 ppid=3030 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 
12:25:47.603000 audit[3167]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.603000 audit[3167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcc7060c0 a2=0 a3=1 items=0 ppid=3030 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:25:47.606000 audit[3169]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.606000 audit[3169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff60559f0 a2=0 a3=1 items=0 ppid=3030 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:25:47.609000 audit[3172]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.609000 audit[3172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc25d18c0 a2=0 a3=1 items=0 ppid=3030 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.609000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:25:47.610000 audit[3173]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.610000 audit[3173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcddcf8d0 a2=0 a3=1 items=0 ppid=3030 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:25:47.613000 audit[3175]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:47.613000 audit[3175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff0d97350 a2=0 a3=1 items=0 ppid=3030 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:25:47.636000 audit[3181]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:47.636000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc5c019d0 a2=0 a3=1 items=0 ppid=3030 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:47.647000 audit[3181]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:47.647000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc5c019d0 a2=0 a3=1 items=0 ppid=3030 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:47.649000 audit[3186]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.649000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd63644a0 a2=0 a3=1 items=0 ppid=3030 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:25:47.651000 audit[3188]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.651000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff1c13b70 a2=0 a3=1 items=0 ppid=3030 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 12:25:47.655000 audit[3191]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.655000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd486f340 a2=0 a3=1 items=0 ppid=3030 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 12:25:47.656000 audit[3192]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.656000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeef68390 a2=0 a3=1 items=0 ppid=3030 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.656000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:25:47.659000 audit[3194]: NETFILTER_CFG table=filter:85 family=10 entries=1 
op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.659000 audit[3194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe93e0a50 a2=0 a3=1 items=0 ppid=3030 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:25:47.660000 audit[3195]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.660000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2320850 a2=0 a3=1 items=0 ppid=3030 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:25:47.662000 audit[3197]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.662000 audit[3197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeb224ca0 a2=0 a3=1 items=0 ppid=3030 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.662000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:25:47.666000 audit[3200]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3200 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.666000 audit[3200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd86b5e80 a2=0 a3=1 items=0 ppid=3030 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:25:47.667000 audit[3201]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.667000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd0299c0 a2=0 a3=1 items=0 ppid=3030 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:25:47.669000 audit[3203]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.669000 audit[3203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 
a1=ffffda5f1010 a2=0 a3=1 items=0 ppid=3030 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:25:47.670000 audit[3204]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.670000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff5b4a5d0 a2=0 a3=1 items=0 ppid=3030 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:25:47.673000 audit[3206]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.673000 audit[3206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffed268c90 a2=0 a3=1 items=0 ppid=3030 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.673000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 12:25:47.676000 
audit[3209]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.676000 audit[3209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffce65af30 a2=0 a3=1 items=0 ppid=3030 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 12:25:47.680000 audit[3212]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.680000 audit[3212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe8077410 a2=0 a3=1 items=0 ppid=3030 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 12:25:47.681000 audit[3213]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.681000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc059a920 a2=0 a3=1 items=0 ppid=3030 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:25:47.684000 audit[3215]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.684000 audit[3215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe1765750 a2=0 a3=1 items=0 ppid=3030 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:25:47.687000 audit[3218]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.687000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffebed12e0 a2=0 a3=1 items=0 ppid=3030 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:25:47.689000 audit[3219]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.689000 audit[3219]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffd541e7d0 a2=0 a3=1 items=0 ppid=3030 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:25:47.691000 audit[3221]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.691000 audit[3221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd9c91070 a2=0 a3=1 items=0 ppid=3030 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:25:47.692000 audit[3222]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.692000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdec15ea0 a2=0 a3=1 items=0 ppid=3030 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:25:47.695000 audit[3224]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 12:25:47.695000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff4848090 a2=0 a3=1 items=0 ppid=3030 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:25:47.698000 audit[3227]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:47.698000 audit[3227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff8ac5570 a2=0 a3=1 items=0 ppid=3030 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:25:47.702000 audit[3229]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:25:47.702000 audit[3229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffc291c0b0 a2=0 a3=1 items=0 ppid=3030 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.702000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:47.702000 audit[3229]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables-resto" Dec 16 12:25:47.702000 audit[3229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc291c0b0 a2=0 a3=1 items=0 ppid=3030 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.702000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:49.377334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330371808.mount: Deactivated successfully. Dec 16 12:25:49.662661 containerd[1706]: time="2025-12-16T12:25:49.662424181Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:49.664042 containerd[1706]: time="2025-12-16T12:25:49.663981626Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:25:49.665644 containerd[1706]: time="2025-12-16T12:25:49.665565191Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:49.667771 containerd[1706]: time="2025-12-16T12:25:49.667717558Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:49.668381 containerd[1706]: time="2025-12-16T12:25:49.668353360Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.116886144s" 
Dec 16 12:25:49.668409 containerd[1706]: time="2025-12-16T12:25:49.668386800Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:25:49.673306 containerd[1706]: time="2025-12-16T12:25:49.673244496Z" level=info msg="CreateContainer within sandbox \"d7350c866bfa7e896b607f7679907710e90492fadcb80635fe8c0a1f3b34bedf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:25:49.683025 containerd[1706]: time="2025-12-16T12:25:49.681070601Z" level=info msg="Container 7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:49.691812 containerd[1706]: time="2025-12-16T12:25:49.691755795Z" level=info msg="CreateContainer within sandbox \"d7350c866bfa7e896b607f7679907710e90492fadcb80635fe8c0a1f3b34bedf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594\"" Dec 16 12:25:49.692343 containerd[1706]: time="2025-12-16T12:25:49.692315757Z" level=info msg="StartContainer for \"7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594\"" Dec 16 12:25:49.693129 containerd[1706]: time="2025-12-16T12:25:49.693104400Z" level=info msg="connecting to shim 7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594" address="unix:///run/containerd/s/05820b25d913420fa3fe8602da2646c990a4ba386391b331ccb72282ffd154d2" protocol=ttrpc version=3 Dec 16 12:25:49.719532 systemd[1]: Started cri-containerd-7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594.scope - libcontainer container 7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594. 
Dec 16 12:25:49.729000 audit: BPF prog-id=146 op=LOAD Dec 16 12:25:49.730000 audit: BPF prog-id=147 op=LOAD Dec 16 12:25:49.730000 audit[3238]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=3103 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:49.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346530323932366666653232376337623137663265643564303932 Dec 16 12:25:49.730000 audit: BPF prog-id=147 op=UNLOAD Dec 16 12:25:49.730000 audit[3238]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:49.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346530323932366666653232376337623137663265643564303932 Dec 16 12:25:49.730000 audit: BPF prog-id=148 op=LOAD Dec 16 12:25:49.730000 audit[3238]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3103 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:49.730000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346530323932366666653232376337623137663265643564303932 Dec 16 12:25:49.730000 audit: BPF prog-id=149 op=LOAD Dec 16 12:25:49.730000 audit[3238]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3103 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:49.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346530323932366666653232376337623137663265643564303932 Dec 16 12:25:49.730000 audit: BPF prog-id=149 op=UNLOAD Dec 16 12:25:49.730000 audit[3238]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:49.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346530323932366666653232376337623137663265643564303932 Dec 16 12:25:49.730000 audit: BPF prog-id=148 op=UNLOAD Dec 16 12:25:49.730000 audit[3238]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:49.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346530323932366666653232376337623137663265643564303932 Dec 16 12:25:49.730000 audit: BPF prog-id=150 op=LOAD Dec 16 12:25:49.730000 audit[3238]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3103 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:49.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346530323932366666653232376337623137663265643564303932 Dec 16 12:25:49.750558 containerd[1706]: time="2025-12-16T12:25:49.750520225Z" level=info msg="StartContainer for \"7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594\" returns successfully" Dec 16 12:25:50.106950 kubelet[2901]: I1216 12:25:50.106877 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6p7kq" podStartSLOduration=4.106860693 podStartE2EDuration="4.106860693s" podCreationTimestamp="2025-12-16 12:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:25:48.101167628 +0000 UTC m=+8.134362781" watchObservedRunningTime="2025-12-16 12:25:50.106860693 +0000 UTC m=+10.140055846" Dec 16 12:25:50.107351 kubelet[2901]: I1216 12:25:50.106998 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-crhxs" podStartSLOduration=0.987775463 podStartE2EDuration="3.106993974s" 
podCreationTimestamp="2025-12-16 12:25:47 +0000 UTC" firstStartedPulling="2025-12-16 12:25:47.550172932 +0000 UTC m=+7.583368045" lastFinishedPulling="2025-12-16 12:25:49.669391403 +0000 UTC m=+9.702586556" observedRunningTime="2025-12-16 12:25:50.106656173 +0000 UTC m=+10.139851326" watchObservedRunningTime="2025-12-16 12:25:50.106993974 +0000 UTC m=+10.140189087" Dec 16 12:25:55.056362 sudo[1943]: pam_unix(sudo:session): session closed for user root Dec 16 12:25:55.056000 audit[1943]: USER_END pid=1943 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:55.060755 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:25:55.060868 kernel: audit: type=1106 audit(1765887955.056:523): pid=1943 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:55.060924 kernel: audit: type=1104 audit(1765887955.056:524): pid=1943 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:55.056000 audit[1943]: CRED_DISP pid=1943 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:55.229388 sshd[1942]: Connection closed by 139.178.68.195 port 59352 Dec 16 12:25:55.229786 sshd-session[1939]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:55.233000 audit[1939]: USER_END pid=1939 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:25:55.236695 systemd[1]: sshd@6-10.0.21.226:22-139.178.68.195:59352.service: Deactivated successfully. Dec 16 12:25:55.233000 audit[1939]: CRED_DISP pid=1939 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:25:55.240240 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:25:55.240728 systemd[1]: session-7.scope: Consumed 7.988s CPU time, 218.9M memory peak. 
Dec 16 12:25:55.241512 kernel: audit: type=1106 audit(1765887955.233:525): pid=1939 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:25:55.241576 kernel: audit: type=1104 audit(1765887955.233:526): pid=1939 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:25:55.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.21.226:22-139.178.68.195:59352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:55.242854 systemd-logind[1677]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:25:55.243720 systemd-logind[1677]: Removed session 7. Dec 16 12:25:55.245387 kernel: audit: type=1131 audit(1765887955.237:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.21.226:22-139.178.68.195:59352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:56.560000 audit[3330]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:56.560000 audit[3330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffed2bf5b0 a2=0 a3=1 items=0 ppid=3030 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:56.566352 kernel: audit: type=1325 audit(1765887956.560:528): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:56.566445 kernel: audit: type=1300 audit(1765887956.560:528): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffed2bf5b0 a2=0 a3=1 items=0 ppid=3030 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:56.566468 kernel: audit: type=1327 audit(1765887956.560:528): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:56.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:56.572000 audit[3330]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:56.572000 audit[3330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed2bf5b0 a2=0 a3=1 items=0 ppid=3030 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:56.580005 kernel: audit: type=1325 
audit(1765887956.572:529): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:56.580104 kernel: audit: type=1300 audit(1765887956.572:529): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed2bf5b0 a2=0 a3=1 items=0 ppid=3030 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:56.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:57.591000 audit[3332]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:57.591000 audit[3332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc01f1320 a2=0 a3=1 items=0 ppid=3030 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:57.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:57.597000 audit[3332]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:57.597000 audit[3332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc01f1320 a2=0 a3=1 items=0 ppid=3030 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:57.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
12:26:00.303000 audit[3334]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:00.305905 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:26:00.306540 kernel: audit: type=1325 audit(1765887960.303:532): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:00.303000 audit[3334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc244cac0 a2=0 a3=1 items=0 ppid=3030 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:00.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:00.314104 kernel: audit: type=1300 audit(1765887960.303:532): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc244cac0 a2=0 a3=1 items=0 ppid=3030 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:00.314350 kernel: audit: type=1327 audit(1765887960.303:532): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:00.314490 kernel: audit: type=1325 audit(1765887960.309:533): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:00.309000 audit[3334]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:00.309000 audit[3334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc244cac0 a2=0 a3=1 items=0 ppid=3030 pid=3334 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:00.320348 kernel: audit: type=1300 audit(1765887960.309:533): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc244cac0 a2=0 a3=1 items=0 ppid=3030 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:00.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:00.322078 kernel: audit: type=1327 audit(1765887960.309:533): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:01.327000 audit[3336]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:01.327000 audit[3336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff88358c0 a2=0 a3=1 items=0 ppid=3030 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:01.333760 kernel: audit: type=1325 audit(1765887961.327:534): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:01.333820 kernel: audit: type=1300 audit(1765887961.327:534): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff88358c0 a2=0 a3=1 items=0 ppid=3030 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:26:01.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:01.335499 kernel: audit: type=1327 audit(1765887961.327:534): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:01.334000 audit[3336]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:01.337469 kernel: audit: type=1325 audit(1765887961.334:535): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:01.334000 audit[3336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff88358c0 a2=0 a3=1 items=0 ppid=3030 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:01.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:02.892000 audit[3338]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:02.892000 audit[3338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffe33b320 a2=0 a3=1 items=0 ppid=3030 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:02.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:02.897000 audit[3338]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
12:26:02.897000 audit[3338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe33b320 a2=0 a3=1 items=0 ppid=3030 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:02.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:02.928504 systemd[1]: Created slice kubepods-besteffort-podf7befffc_b1d1_446d_a3c7_c468901efc2d.slice - libcontainer container kubepods-besteffort-podf7befffc_b1d1_446d_a3c7_c468901efc2d.slice. Dec 16 12:26:02.994438 kubelet[2901]: I1216 12:26:02.994393 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5m8m\" (UniqueName: \"kubernetes.io/projected/f7befffc-b1d1-446d-a3c7-c468901efc2d-kube-api-access-d5m8m\") pod \"calico-typha-559c6f94f4-zntv4\" (UID: \"f7befffc-b1d1-446d-a3c7-c468901efc2d\") " pod="calico-system/calico-typha-559c6f94f4-zntv4" Dec 16 12:26:02.994438 kubelet[2901]: I1216 12:26:02.994445 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f7befffc-b1d1-446d-a3c7-c468901efc2d-typha-certs\") pod \"calico-typha-559c6f94f4-zntv4\" (UID: \"f7befffc-b1d1-446d-a3c7-c468901efc2d\") " pod="calico-system/calico-typha-559c6f94f4-zntv4" Dec 16 12:26:02.994943 kubelet[2901]: I1216 12:26:02.994520 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7befffc-b1d1-446d-a3c7-c468901efc2d-tigera-ca-bundle\") pod \"calico-typha-559c6f94f4-zntv4\" (UID: \"f7befffc-b1d1-446d-a3c7-c468901efc2d\") " pod="calico-system/calico-typha-559c6f94f4-zntv4" Dec 16 12:26:03.217640 systemd[1]: Created slice 
kubepods-besteffort-pod876d359c_ae6f_45b3_aede_21cef6c46130.slice - libcontainer container kubepods-besteffort-pod876d359c_ae6f_45b3_aede_21cef6c46130.slice. Dec 16 12:26:03.236343 containerd[1706]: time="2025-12-16T12:26:03.236285779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-559c6f94f4-zntv4,Uid:f7befffc-b1d1-446d-a3c7-c468901efc2d,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:03.262997 containerd[1706]: time="2025-12-16T12:26:03.262942145Z" level=info msg="connecting to shim 44163c4870950cafa4e094b3f6ed2310acaf88554be38f7162bcee2ea9509b98" address="unix:///run/containerd/s/8f28d178fcdbf7be0f324ad40fa0e15b62600bc85ce1c654306440a61cd4f182" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:03.291634 systemd[1]: Started cri-containerd-44163c4870950cafa4e094b3f6ed2310acaf88554be38f7162bcee2ea9509b98.scope - libcontainer container 44163c4870950cafa4e094b3f6ed2310acaf88554be38f7162bcee2ea9509b98. Dec 16 12:26:03.296779 kubelet[2901]: I1216 12:26:03.296742 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-cni-bin-dir\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.296779 kubelet[2901]: I1216 12:26:03.296781 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-cni-log-dir\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.296918 kubelet[2901]: I1216 12:26:03.296799 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-var-lib-calico\") pod 
\"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.296918 kubelet[2901]: I1216 12:26:03.296827 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9sp5\" (UniqueName: \"kubernetes.io/projected/876d359c-ae6f-45b3-aede-21cef6c46130-kube-api-access-v9sp5\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.296918 kubelet[2901]: I1216 12:26:03.296847 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-var-run-calico\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.296918 kubelet[2901]: I1216 12:26:03.296875 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-flexvol-driver-host\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.297012 kubelet[2901]: I1216 12:26:03.296926 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-lib-modules\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.297012 kubelet[2901]: I1216 12:26:03.296987 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-policysync\") pod \"calico-node-5jndz\" (UID: 
\"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.297053 kubelet[2901]: I1216 12:26:03.297024 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/876d359c-ae6f-45b3-aede-21cef6c46130-tigera-ca-bundle\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.297053 kubelet[2901]: I1216 12:26:03.297041 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-xtables-lock\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.297098 kubelet[2901]: I1216 12:26:03.297061 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/876d359c-ae6f-45b3-aede-21cef6c46130-node-certs\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.297098 kubelet[2901]: I1216 12:26:03.297094 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/876d359c-ae6f-45b3-aede-21cef6c46130-cni-net-dir\") pod \"calico-node-5jndz\" (UID: \"876d359c-ae6f-45b3-aede-21cef6c46130\") " pod="calico-system/calico-node-5jndz" Dec 16 12:26:03.303000 audit: BPF prog-id=151 op=LOAD Dec 16 12:26:03.304000 audit: BPF prog-id=152 op=LOAD Dec 16 12:26:03.304000 audit[3361]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3351 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434313633633438373039353063616661346530393462336636656432 Dec 16 12:26:03.304000 audit: BPF prog-id=152 op=UNLOAD Dec 16 12:26:03.304000 audit[3361]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434313633633438373039353063616661346530393462336636656432 Dec 16 12:26:03.304000 audit: BPF prog-id=153 op=LOAD Dec 16 12:26:03.304000 audit[3361]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3351 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434313633633438373039353063616661346530393462336636656432 Dec 16 12:26:03.304000 audit: BPF prog-id=154 op=LOAD Dec 16 12:26:03.304000 audit[3361]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3351 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434313633633438373039353063616661346530393462336636656432 Dec 16 12:26:03.304000 audit: BPF prog-id=154 op=UNLOAD Dec 16 12:26:03.304000 audit[3361]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434313633633438373039353063616661346530393462336636656432 Dec 16 12:26:03.304000 audit: BPF prog-id=153 op=UNLOAD Dec 16 12:26:03.304000 audit[3361]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434313633633438373039353063616661346530393462336636656432 Dec 16 12:26:03.304000 audit: BPF prog-id=155 op=LOAD Dec 16 12:26:03.304000 audit[3361]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3351 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434313633633438373039353063616661346530393462336636656432 Dec 16 12:26:03.331777 containerd[1706]: time="2025-12-16T12:26:03.331733887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-559c6f94f4-zntv4,Uid:f7befffc-b1d1-446d-a3c7-c468901efc2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"44163c4870950cafa4e094b3f6ed2310acaf88554be38f7162bcee2ea9509b98\"" Dec 16 12:26:03.333809 containerd[1706]: time="2025-12-16T12:26:03.333731693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:26:03.399355 kubelet[2901]: E1216 12:26:03.399323 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.399601 kubelet[2901]: W1216 12:26:03.399489 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.399601 kubelet[2901]: E1216 12:26:03.399514 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.399890 kubelet[2901]: E1216 12:26:03.399841 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.399890 kubelet[2901]: W1216 12:26:03.399854 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.399890 kubelet[2901]: E1216 12:26:03.399865 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.404847 kubelet[2901]: E1216 12:26:03.404779 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.404847 kubelet[2901]: W1216 12:26:03.404798 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.404847 kubelet[2901]: E1216 12:26:03.404813 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.410998 kubelet[2901]: E1216 12:26:03.410977 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.410998 kubelet[2901]: W1216 12:26:03.410995 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.411119 kubelet[2901]: E1216 12:26:03.411009 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.499572 kubelet[2901]: E1216 12:26:03.499361 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:03.527252 containerd[1706]: time="2025-12-16T12:26:03.526515035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5jndz,Uid:876d359c-ae6f-45b3-aede-21cef6c46130,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:03.556325 containerd[1706]: time="2025-12-16T12:26:03.555664169Z" level=info msg="connecting to shim 16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980" address="unix:///run/containerd/s/ae085f8d0b1617ae6daaed001441064d84dad5f77785dc3b2834c22964da76da" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:03.580847 systemd[1]: Started cri-containerd-16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980.scope - libcontainer container 16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980. 
Dec 16 12:26:03.583479 kubelet[2901]: E1216 12:26:03.583444 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.583479 kubelet[2901]: W1216 12:26:03.583474 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.583669 kubelet[2901]: E1216 12:26:03.583619 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.583927 kubelet[2901]: E1216 12:26:03.583896 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.583987 kubelet[2901]: W1216 12:26:03.583918 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.583987 kubelet[2901]: E1216 12:26:03.583967 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.584145 kubelet[2901]: E1216 12:26:03.584114 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.584145 kubelet[2901]: W1216 12:26:03.584141 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.584236 kubelet[2901]: E1216 12:26:03.584152 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.584635 kubelet[2901]: E1216 12:26:03.584608 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.584699 kubelet[2901]: W1216 12:26:03.584654 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.584699 kubelet[2901]: E1216 12:26:03.584669 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.584929 kubelet[2901]: E1216 12:26:03.584862 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.584929 kubelet[2901]: W1216 12:26:03.584908 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.584929 kubelet[2901]: E1216 12:26:03.584921 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.585156 kubelet[2901]: E1216 12:26:03.585122 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.585156 kubelet[2901]: W1216 12:26:03.585153 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.585370 kubelet[2901]: E1216 12:26:03.585171 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.585581 kubelet[2901]: E1216 12:26:03.585563 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.585581 kubelet[2901]: W1216 12:26:03.585579 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.585644 kubelet[2901]: E1216 12:26:03.585592 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.585820 kubelet[2901]: E1216 12:26:03.585807 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.585851 kubelet[2901]: W1216 12:26:03.585820 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.585851 kubelet[2901]: E1216 12:26:03.585846 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.586169 kubelet[2901]: E1216 12:26:03.586116 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.586205 kubelet[2901]: W1216 12:26:03.586191 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.586232 kubelet[2901]: E1216 12:26:03.586206 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.586647 kubelet[2901]: E1216 12:26:03.586623 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.586647 kubelet[2901]: W1216 12:26:03.586640 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.586647 kubelet[2901]: E1216 12:26:03.586652 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.587111 kubelet[2901]: E1216 12:26:03.586838 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.587111 kubelet[2901]: W1216 12:26:03.586852 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.587111 kubelet[2901]: E1216 12:26:03.586862 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.587111 kubelet[2901]: E1216 12:26:03.587032 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.587111 kubelet[2901]: W1216 12:26:03.587042 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.587111 kubelet[2901]: E1216 12:26:03.587050 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.587617 kubelet[2901]: E1216 12:26:03.587246 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.587617 kubelet[2901]: W1216 12:26:03.587257 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.587617 kubelet[2901]: E1216 12:26:03.587265 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.587940 kubelet[2901]: E1216 12:26:03.587666 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.587940 kubelet[2901]: W1216 12:26:03.587679 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.587940 kubelet[2901]: E1216 12:26:03.587690 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.588793 kubelet[2901]: E1216 12:26:03.588131 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.588793 kubelet[2901]: W1216 12:26:03.588146 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.588793 kubelet[2901]: E1216 12:26:03.588156 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.588793 kubelet[2901]: E1216 12:26:03.588361 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.588793 kubelet[2901]: W1216 12:26:03.588370 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.588793 kubelet[2901]: E1216 12:26:03.588379 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.588793 kubelet[2901]: E1216 12:26:03.588712 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.588793 kubelet[2901]: W1216 12:26:03.588731 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.588793 kubelet[2901]: E1216 12:26:03.588742 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.589204 kubelet[2901]: E1216 12:26:03.589075 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.589204 kubelet[2901]: W1216 12:26:03.589090 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.589204 kubelet[2901]: E1216 12:26:03.589101 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.589519 kubelet[2901]: E1216 12:26:03.589362 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.589519 kubelet[2901]: W1216 12:26:03.589373 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.589519 kubelet[2901]: E1216 12:26:03.589384 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.590194 kubelet[2901]: E1216 12:26:03.589696 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.590194 kubelet[2901]: W1216 12:26:03.589707 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.590194 kubelet[2901]: E1216 12:26:03.589718 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.595000 audit: BPF prog-id=156 op=LOAD Dec 16 12:26:03.596000 audit: BPF prog-id=157 op=LOAD Dec 16 12:26:03.596000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3412 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136646661663232303563373430636562646536356232306236386536 Dec 16 12:26:03.596000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:26:03.596000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.596000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136646661663232303563373430636562646536356232306236386536 Dec 16 12:26:03.596000 audit: BPF prog-id=158 op=LOAD Dec 16 12:26:03.596000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3412 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136646661663232303563373430636562646536356232306236386536 Dec 16 12:26:03.596000 audit: BPF prog-id=159 op=LOAD Dec 16 12:26:03.596000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3412 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136646661663232303563373430636562646536356232306236386536 Dec 16 12:26:03.596000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:26:03.596000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:26:03.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136646661663232303563373430636562646536356232306236386536 Dec 16 12:26:03.596000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:26:03.596000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136646661663232303563373430636562646536356232306236386536 Dec 16 12:26:03.596000 audit: BPF prog-id=160 op=LOAD Dec 16 12:26:03.596000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3412 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136646661663232303563373430636562646536356232306236386536 Dec 16 12:26:03.601129 kubelet[2901]: E1216 12:26:03.600833 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.601129 kubelet[2901]: W1216 12:26:03.600859 2901 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.602333 kubelet[2901]: E1216 12:26:03.600881 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.602333 kubelet[2901]: I1216 12:26:03.601464 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1da2d440-02fc-4a40-abf1-80ffcd9275c1-socket-dir\") pod \"csi-node-driver-k8fpw\" (UID: \"1da2d440-02fc-4a40-abf1-80ffcd9275c1\") " pod="calico-system/csi-node-driver-k8fpw" Dec 16 12:26:03.605323 kubelet[2901]: E1216 12:26:03.602993 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.605323 kubelet[2901]: W1216 12:26:03.603026 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.605323 kubelet[2901]: E1216 12:26:03.603046 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.605323 kubelet[2901]: I1216 12:26:03.603089 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75fb\" (UniqueName: \"kubernetes.io/projected/1da2d440-02fc-4a40-abf1-80ffcd9275c1-kube-api-access-l75fb\") pod \"csi-node-driver-k8fpw\" (UID: \"1da2d440-02fc-4a40-abf1-80ffcd9275c1\") " pod="calico-system/csi-node-driver-k8fpw" Dec 16 12:26:03.605742 kubelet[2901]: E1216 12:26:03.605582 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.605742 kubelet[2901]: W1216 12:26:03.605602 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.605742 kubelet[2901]: E1216 12:26:03.605620 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.605742 kubelet[2901]: I1216 12:26:03.605706 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1da2d440-02fc-4a40-abf1-80ffcd9275c1-kubelet-dir\") pod \"csi-node-driver-k8fpw\" (UID: \"1da2d440-02fc-4a40-abf1-80ffcd9275c1\") " pod="calico-system/csi-node-driver-k8fpw" Dec 16 12:26:03.606741 kubelet[2901]: E1216 12:26:03.606480 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.607091 kubelet[2901]: W1216 12:26:03.606860 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.607091 kubelet[2901]: E1216 12:26:03.606889 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.607247 kubelet[2901]: E1216 12:26:03.607232 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.607381 kubelet[2901]: W1216 12:26:03.607364 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.607469 kubelet[2901]: E1216 12:26:03.607454 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.607712 kubelet[2901]: E1216 12:26:03.607697 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.607874 kubelet[2901]: W1216 12:26:03.607773 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.607874 kubelet[2901]: E1216 12:26:03.607791 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.608319 kubelet[2901]: I1216 12:26:03.608017 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1da2d440-02fc-4a40-abf1-80ffcd9275c1-registration-dir\") pod \"csi-node-driver-k8fpw\" (UID: \"1da2d440-02fc-4a40-abf1-80ffcd9275c1\") " pod="calico-system/csi-node-driver-k8fpw" Dec 16 12:26:03.608484 kubelet[2901]: E1216 12:26:03.608466 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.608682 kubelet[2901]: W1216 12:26:03.608660 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.608788 kubelet[2901]: E1216 12:26:03.608763 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.611434 kubelet[2901]: E1216 12:26:03.611409 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.611659 kubelet[2901]: W1216 12:26:03.611507 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.611659 kubelet[2901]: E1216 12:26:03.611527 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.611811 kubelet[2901]: E1216 12:26:03.611799 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.611875 kubelet[2901]: W1216 12:26:03.611863 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.611922 kubelet[2901]: E1216 12:26:03.611913 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.611993 kubelet[2901]: I1216 12:26:03.611980 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1da2d440-02fc-4a40-abf1-80ffcd9275c1-varrun\") pod \"csi-node-driver-k8fpw\" (UID: \"1da2d440-02fc-4a40-abf1-80ffcd9275c1\") " pod="calico-system/csi-node-driver-k8fpw" Dec 16 12:26:03.612218 kubelet[2901]: E1216 12:26:03.612201 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.612286 kubelet[2901]: W1216 12:26:03.612273 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.612464 kubelet[2901]: E1216 12:26:03.612446 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.613682 kubelet[2901]: E1216 12:26:03.613536 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.613682 kubelet[2901]: W1216 12:26:03.613554 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.613682 kubelet[2901]: E1216 12:26:03.613567 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.613900 kubelet[2901]: E1216 12:26:03.613887 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.613975 kubelet[2901]: W1216 12:26:03.613962 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.614037 kubelet[2901]: E1216 12:26:03.614025 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.614664 kubelet[2901]: E1216 12:26:03.614364 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.614664 kubelet[2901]: W1216 12:26:03.614379 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.614664 kubelet[2901]: E1216 12:26:03.614391 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.614920 kubelet[2901]: E1216 12:26:03.614758 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.614920 kubelet[2901]: W1216 12:26:03.614774 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.614920 kubelet[2901]: E1216 12:26:03.614787 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.615786 kubelet[2901]: E1216 12:26:03.615765 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.615786 kubelet[2901]: W1216 12:26:03.615786 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.615922 kubelet[2901]: E1216 12:26:03.615799 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.621645 containerd[1706]: time="2025-12-16T12:26:03.621574861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5jndz,Uid:876d359c-ae6f-45b3-aede-21cef6c46130,Namespace:calico-system,Attempt:0,} returns sandbox id \"16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980\"" Dec 16 12:26:03.715212 kubelet[2901]: E1216 12:26:03.715058 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.715212 kubelet[2901]: W1216 12:26:03.715082 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.715212 kubelet[2901]: E1216 12:26:03.715101 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.715632 kubelet[2901]: E1216 12:26:03.715485 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.715632 kubelet[2901]: W1216 12:26:03.715499 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.715632 kubelet[2901]: E1216 12:26:03.715511 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.715802 kubelet[2901]: E1216 12:26:03.715787 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.715860 kubelet[2901]: W1216 12:26:03.715848 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.715920 kubelet[2901]: E1216 12:26:03.715908 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.716225 kubelet[2901]: E1216 12:26:03.716207 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.716337 kubelet[2901]: W1216 12:26:03.716315 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.716406 kubelet[2901]: E1216 12:26:03.716395 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.716837 kubelet[2901]: E1216 12:26:03.716728 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.716837 kubelet[2901]: W1216 12:26:03.716740 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.716837 kubelet[2901]: E1216 12:26:03.716750 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.716993 kubelet[2901]: E1216 12:26:03.716981 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.717054 kubelet[2901]: W1216 12:26:03.717042 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.717203 kubelet[2901]: E1216 12:26:03.717099 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.717372 kubelet[2901]: E1216 12:26:03.717357 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.717524 kubelet[2901]: W1216 12:26:03.717419 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.717524 kubelet[2901]: E1216 12:26:03.717434 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.717696 kubelet[2901]: E1216 12:26:03.717684 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.717837 kubelet[2901]: W1216 12:26:03.717746 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.717837 kubelet[2901]: E1216 12:26:03.717759 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.718143 kubelet[2901]: E1216 12:26:03.718024 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.718143 kubelet[2901]: W1216 12:26:03.718037 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.718143 kubelet[2901]: E1216 12:26:03.718046 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.718315 kubelet[2901]: E1216 12:26:03.718285 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.718488 kubelet[2901]: W1216 12:26:03.718373 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.718488 kubelet[2901]: E1216 12:26:03.718389 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.718622 kubelet[2901]: E1216 12:26:03.718610 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.718678 kubelet[2901]: W1216 12:26:03.718668 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.718729 kubelet[2901]: E1216 12:26:03.718720 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.719023 kubelet[2901]: E1216 12:26:03.718926 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.719023 kubelet[2901]: W1216 12:26:03.718937 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.719023 kubelet[2901]: E1216 12:26:03.718947 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.719181 kubelet[2901]: E1216 12:26:03.719169 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.719232 kubelet[2901]: W1216 12:26:03.719221 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.719397 kubelet[2901]: E1216 12:26:03.719275 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.719497 kubelet[2901]: E1216 12:26:03.719484 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.719563 kubelet[2901]: W1216 12:26:03.719550 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.719699 kubelet[2901]: E1216 12:26:03.719614 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.719986 kubelet[2901]: E1216 12:26:03.719878 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.719986 kubelet[2901]: W1216 12:26:03.719891 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.719986 kubelet[2901]: E1216 12:26:03.719901 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.720143 kubelet[2901]: E1216 12:26:03.720130 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.720191 kubelet[2901]: W1216 12:26:03.720182 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.720239 kubelet[2901]: E1216 12:26:03.720229 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.720595 kubelet[2901]: E1216 12:26:03.720493 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.720595 kubelet[2901]: W1216 12:26:03.720508 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.720595 kubelet[2901]: E1216 12:26:03.720518 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.720771 kubelet[2901]: E1216 12:26:03.720759 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.720821 kubelet[2901]: W1216 12:26:03.720811 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.720874 kubelet[2901]: E1216 12:26:03.720864 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.721193 kubelet[2901]: E1216 12:26:03.721090 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.721193 kubelet[2901]: W1216 12:26:03.721101 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.721193 kubelet[2901]: E1216 12:26:03.721111 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.721389 kubelet[2901]: E1216 12:26:03.721376 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.721454 kubelet[2901]: W1216 12:26:03.721442 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.721502 kubelet[2901]: E1216 12:26:03.721492 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.721857 kubelet[2901]: E1216 12:26:03.721736 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.721857 kubelet[2901]: W1216 12:26:03.721749 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.721857 kubelet[2901]: E1216 12:26:03.721758 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.722009 kubelet[2901]: E1216 12:26:03.721996 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.722202 kubelet[2901]: W1216 12:26:03.722048 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.722202 kubelet[2901]: E1216 12:26:03.722063 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.722429 kubelet[2901]: E1216 12:26:03.722409 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.722633 kubelet[2901]: W1216 12:26:03.722486 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.722633 kubelet[2901]: E1216 12:26:03.722502 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.722774 kubelet[2901]: E1216 12:26:03.722761 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.722826 kubelet[2901]: W1216 12:26:03.722816 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.722875 kubelet[2901]: E1216 12:26:03.722864 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.723158 kubelet[2901]: E1216 12:26:03.723101 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.723158 kubelet[2901]: W1216 12:26:03.723115 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.723158 kubelet[2901]: E1216 12:26:03.723126 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:03.732833 kubelet[2901]: E1216 12:26:03.732763 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:03.732833 kubelet[2901]: W1216 12:26:03.732782 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:03.732833 kubelet[2901]: E1216 12:26:03.732797 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:03.920000 audit[3514]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:03.920000 audit[3514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdca96990 a2=0 a3=1 items=0 ppid=3030 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:03.928000 audit[3514]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:03.928000 audit[3514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdca96990 a2=0 a3=1 items=0 ppid=3030 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:04.964153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917889734.mount: Deactivated successfully. 
Dec 16 12:26:05.055587 kubelet[2901]: E1216 12:26:05.055517 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:05.960911 containerd[1706]: time="2025-12-16T12:26:05.960852522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:05.962380 containerd[1706]: time="2025-12-16T12:26:05.962330007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 12:26:05.963513 containerd[1706]: time="2025-12-16T12:26:05.963472371Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:05.966318 containerd[1706]: time="2025-12-16T12:26:05.966223179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:05.967585 containerd[1706]: time="2025-12-16T12:26:05.967546704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.633772211s" Dec 16 12:26:05.967716 containerd[1706]: time="2025-12-16T12:26:05.967699184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:26:05.969142 containerd[1706]: time="2025-12-16T12:26:05.969110029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:26:05.980997 containerd[1706]: time="2025-12-16T12:26:05.980956267Z" level=info msg="CreateContainer within sandbox \"44163c4870950cafa4e094b3f6ed2310acaf88554be38f7162bcee2ea9509b98\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:26:05.991405 containerd[1706]: time="2025-12-16T12:26:05.991347660Z" level=info msg="Container b6a82c76554a23babb198de8ae05b38c33962911cb141d2e7ac16635f310ac8b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:05.994083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285051006.mount: Deactivated successfully. Dec 16 12:26:06.001752 containerd[1706]: time="2025-12-16T12:26:06.001688334Z" level=info msg="CreateContainer within sandbox \"44163c4870950cafa4e094b3f6ed2310acaf88554be38f7162bcee2ea9509b98\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b6a82c76554a23babb198de8ae05b38c33962911cb141d2e7ac16635f310ac8b\"" Dec 16 12:26:06.003801 containerd[1706]: time="2025-12-16T12:26:06.003596540Z" level=info msg="StartContainer for \"b6a82c76554a23babb198de8ae05b38c33962911cb141d2e7ac16635f310ac8b\"" Dec 16 12:26:06.006388 containerd[1706]: time="2025-12-16T12:26:06.006315829Z" level=info msg="connecting to shim b6a82c76554a23babb198de8ae05b38c33962911cb141d2e7ac16635f310ac8b" address="unix:///run/containerd/s/8f28d178fcdbf7be0f324ad40fa0e15b62600bc85ce1c654306440a61cd4f182" protocol=ttrpc version=3 Dec 16 12:26:06.032533 systemd[1]: Started cri-containerd-b6a82c76554a23babb198de8ae05b38c33962911cb141d2e7ac16635f310ac8b.scope - libcontainer container b6a82c76554a23babb198de8ae05b38c33962911cb141d2e7ac16635f310ac8b. 
Dec 16 12:26:06.051515 kernel: kauditd_printk_skb: 58 callbacks suppressed Dec 16 12:26:06.051614 kernel: audit: type=1334 audit(1765887966.049:556): prog-id=161 op=LOAD Dec 16 12:26:06.049000 audit: BPF prog-id=161 op=LOAD Dec 16 12:26:06.051000 audit: BPF prog-id=162 op=LOAD Dec 16 12:26:06.052875 kernel: audit: type=1334 audit(1765887966.051:557): prog-id=162 op=LOAD Dec 16 12:26:06.052910 kernel: audit: type=1300 audit(1765887966.051:557): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.051000 audit[3526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.059340 kernel: audit: type=1327 audit(1765887966.051:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.051000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:26:06.060499 kernel: audit: type=1334 audit(1765887966.051:558): prog-id=162 op=UNLOAD Dec 16 12:26:06.051000 audit[3526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3526 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.063876 kernel: audit: type=1300 audit(1765887966.051:558): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.067070 kernel: audit: type=1327 audit(1765887966.051:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.051000 audit: BPF prog-id=163 op=LOAD Dec 16 12:26:06.067992 kernel: audit: type=1334 audit(1765887966.051:559): prog-id=163 op=LOAD Dec 16 12:26:06.068038 kernel: audit: type=1300 audit(1765887966.051:559): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.051000 audit[3526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:26:06.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.074781 kernel: audit: type=1327 audit(1765887966.051:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.052000 audit: BPF prog-id=164 op=LOAD Dec 16 12:26:06.052000 audit[3526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.055000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:26:06.055000 audit[3526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.055000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.055000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:26:06.055000 audit[3526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.055000 audit: BPF prog-id=165 op=LOAD Dec 16 12:26:06.055000 audit[3526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=3351 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:06.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613832633736353534613233626162623139386465386165303562 Dec 16 12:26:06.091043 containerd[1706]: time="2025-12-16T12:26:06.090996662Z" level=info msg="StartContainer for \"b6a82c76554a23babb198de8ae05b38c33962911cb141d2e7ac16635f310ac8b\" returns successfully" Dec 16 12:26:06.206982 kubelet[2901]: E1216 12:26:06.206925 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of 
JSON input Dec 16 12:26:06.207782 kubelet[2901]: W1216 12:26:06.207126 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.207782 kubelet[2901]: E1216 12:26:06.207153 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.208424 kubelet[2901]: E1216 12:26:06.208140 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.208553 kubelet[2901]: W1216 12:26:06.208507 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.208663 kubelet[2901]: E1216 12:26:06.208609 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.210075 kubelet[2901]: E1216 12:26:06.210055 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.210163 kubelet[2901]: W1216 12:26:06.210151 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.210217 kubelet[2901]: E1216 12:26:06.210207 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.210590 kubelet[2901]: E1216 12:26:06.210576 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.211470 kubelet[2901]: W1216 12:26:06.211141 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.211470 kubelet[2901]: E1216 12:26:06.211214 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.211641 kubelet[2901]: E1216 12:26:06.211616 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.211695 kubelet[2901]: W1216 12:26:06.211642 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.211695 kubelet[2901]: E1216 12:26:06.211658 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.212716 kubelet[2901]: E1216 12:26:06.212242 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.212716 kubelet[2901]: W1216 12:26:06.212264 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.212716 kubelet[2901]: E1216 12:26:06.212279 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.213505 kubelet[2901]: E1216 12:26:06.213464 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.213505 kubelet[2901]: W1216 12:26:06.213492 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.213505 kubelet[2901]: E1216 12:26:06.213506 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.213768 kubelet[2901]: E1216 12:26:06.213745 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.213768 kubelet[2901]: W1216 12:26:06.213762 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.213834 kubelet[2901]: E1216 12:26:06.213773 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.214413 kubelet[2901]: E1216 12:26:06.214383 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.214413 kubelet[2901]: W1216 12:26:06.214404 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.214413 kubelet[2901]: E1216 12:26:06.214419 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.214863 kubelet[2901]: E1216 12:26:06.214835 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.214863 kubelet[2901]: W1216 12:26:06.214855 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.214863 kubelet[2901]: E1216 12:26:06.214869 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.215274 kubelet[2901]: E1216 12:26:06.215249 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.215274 kubelet[2901]: W1216 12:26:06.215267 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.215390 kubelet[2901]: E1216 12:26:06.215280 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.215823 kubelet[2901]: E1216 12:26:06.215758 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.215885 kubelet[2901]: W1216 12:26:06.215848 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.215885 kubelet[2901]: E1216 12:26:06.215867 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.217427 kubelet[2901]: E1216 12:26:06.217394 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.217427 kubelet[2901]: W1216 12:26:06.217422 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.217522 kubelet[2901]: E1216 12:26:06.217443 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.218612 kubelet[2901]: E1216 12:26:06.218578 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.218612 kubelet[2901]: W1216 12:26:06.218611 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.218708 kubelet[2901]: E1216 12:26:06.218627 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.218907 kubelet[2901]: E1216 12:26:06.218880 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.218907 kubelet[2901]: W1216 12:26:06.218898 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.218907 kubelet[2901]: E1216 12:26:06.218909 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.235067 kubelet[2901]: E1216 12:26:06.235018 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.235067 kubelet[2901]: W1216 12:26:06.235048 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.235224 kubelet[2901]: E1216 12:26:06.235086 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.235632 kubelet[2901]: E1216 12:26:06.235601 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.235880 kubelet[2901]: W1216 12:26:06.235634 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.235880 kubelet[2901]: E1216 12:26:06.235651 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.236125 kubelet[2901]: E1216 12:26:06.236106 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.236198 kubelet[2901]: W1216 12:26:06.236185 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.236257 kubelet[2901]: E1216 12:26:06.236245 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.236523 kubelet[2901]: E1216 12:26:06.236509 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.236587 kubelet[2901]: W1216 12:26:06.236576 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.236676 kubelet[2901]: E1216 12:26:06.236660 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.237221 kubelet[2901]: E1216 12:26:06.237105 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.237221 kubelet[2901]: W1216 12:26:06.237121 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.237221 kubelet[2901]: E1216 12:26:06.237132 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.237429 kubelet[2901]: E1216 12:26:06.237415 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.237502 kubelet[2901]: W1216 12:26:06.237491 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.237701 kubelet[2901]: E1216 12:26:06.237593 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.238476 kubelet[2901]: E1216 12:26:06.237895 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.238586 kubelet[2901]: W1216 12:26:06.238567 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.238643 kubelet[2901]: E1216 12:26:06.238632 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.238996 kubelet[2901]: E1216 12:26:06.238887 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.238996 kubelet[2901]: W1216 12:26:06.238899 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.238996 kubelet[2901]: E1216 12:26:06.238909 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.239183 kubelet[2901]: E1216 12:26:06.239170 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.239240 kubelet[2901]: W1216 12:26:06.239229 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.239326 kubelet[2901]: E1216 12:26:06.239283 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.239634 kubelet[2901]: E1216 12:26:06.239537 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.239634 kubelet[2901]: W1216 12:26:06.239549 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.239634 kubelet[2901]: E1216 12:26:06.239559 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.239804 kubelet[2901]: E1216 12:26:06.239792 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.239853 kubelet[2901]: W1216 12:26:06.239844 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.239904 kubelet[2901]: E1216 12:26:06.239895 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.240410 kubelet[2901]: E1216 12:26:06.240125 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.240410 kubelet[2901]: W1216 12:26:06.240139 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.240410 kubelet[2901]: E1216 12:26:06.240148 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.241263 kubelet[2901]: E1216 12:26:06.241209 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.241263 kubelet[2901]: W1216 12:26:06.241235 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.241263 kubelet[2901]: E1216 12:26:06.241253 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.241513 kubelet[2901]: E1216 12:26:06.241493 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.241513 kubelet[2901]: W1216 12:26:06.241507 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.241605 kubelet[2901]: E1216 12:26:06.241519 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.242508 kubelet[2901]: E1216 12:26:06.242385 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.242508 kubelet[2901]: W1216 12:26:06.242403 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.242508 kubelet[2901]: E1216 12:26:06.242419 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.242659 kubelet[2901]: E1216 12:26:06.242642 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.242659 kubelet[2901]: W1216 12:26:06.242654 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.243422 kubelet[2901]: E1216 12:26:06.242663 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:06.243422 kubelet[2901]: E1216 12:26:06.242854 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.243422 kubelet[2901]: W1216 12:26:06.242866 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.243422 kubelet[2901]: E1216 12:26:06.242875 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:06.243422 kubelet[2901]: E1216 12:26:06.243068 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:06.243422 kubelet[2901]: W1216 12:26:06.243083 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:06.243422 kubelet[2901]: E1216 12:26:06.243096 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.055700 kubelet[2901]: E1216 12:26:07.055643 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:07.138061 kubelet[2901]: I1216 12:26:07.137691 2901 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:26:07.224790 kubelet[2901]: E1216 12:26:07.224763 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.225199 kubelet[2901]: W1216 12:26:07.225177 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.225325 kubelet[2901]: E1216 12:26:07.225308 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.225606 kubelet[2901]: E1216 12:26:07.225593 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.225720 kubelet[2901]: W1216 12:26:07.225666 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.225720 kubelet[2901]: E1216 12:26:07.225682 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.226038 kubelet[2901]: E1216 12:26:07.225972 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.226038 kubelet[2901]: W1216 12:26:07.225985 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.226038 kubelet[2901]: E1216 12:26:07.225996 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.226480 kubelet[2901]: E1216 12:26:07.226396 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.226480 kubelet[2901]: W1216 12:26:07.226410 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.226480 kubelet[2901]: E1216 12:26:07.226422 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.226791 kubelet[2901]: E1216 12:26:07.226733 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.226791 kubelet[2901]: W1216 12:26:07.226745 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.226791 kubelet[2901]: E1216 12:26:07.226756 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.227104 kubelet[2901]: E1216 12:26:07.227045 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.227104 kubelet[2901]: W1216 12:26:07.227058 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.227104 kubelet[2901]: E1216 12:26:07.227069 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.227413 kubelet[2901]: E1216 12:26:07.227400 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.227529 kubelet[2901]: W1216 12:26:07.227472 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.227529 kubelet[2901]: E1216 12:26:07.227487 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.228447 kubelet[2901]: E1216 12:26:07.228369 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.228447 kubelet[2901]: W1216 12:26:07.228383 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.228447 kubelet[2901]: E1216 12:26:07.228394 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.228780 kubelet[2901]: E1216 12:26:07.228766 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.228901 kubelet[2901]: W1216 12:26:07.228847 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.228901 kubelet[2901]: E1216 12:26:07.228864 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.229221 kubelet[2901]: E1216 12:26:07.229207 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.229392 kubelet[2901]: W1216 12:26:07.229306 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.229392 kubelet[2901]: E1216 12:26:07.229324 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.229701 kubelet[2901]: E1216 12:26:07.229646 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.229701 kubelet[2901]: W1216 12:26:07.229658 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.229701 kubelet[2901]: E1216 12:26:07.229669 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.230130 kubelet[2901]: E1216 12:26:07.230056 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.230130 kubelet[2901]: W1216 12:26:07.230070 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.230130 kubelet[2901]: E1216 12:26:07.230082 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.230517 kubelet[2901]: E1216 12:26:07.230453 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.230517 kubelet[2901]: W1216 12:26:07.230466 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.230517 kubelet[2901]: E1216 12:26:07.230478 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.230798 kubelet[2901]: E1216 12:26:07.230784 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.230902 kubelet[2901]: W1216 12:26:07.230864 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.230960 kubelet[2901]: E1216 12:26:07.230948 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.231241 kubelet[2901]: E1216 12:26:07.231180 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.231241 kubelet[2901]: W1216 12:26:07.231191 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.231241 kubelet[2901]: E1216 12:26:07.231201 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.248761 kubelet[2901]: E1216 12:26:07.248732 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.248761 kubelet[2901]: W1216 12:26:07.248755 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.248761 kubelet[2901]: E1216 12:26:07.248773 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.250681 kubelet[2901]: E1216 12:26:07.250651 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.250681 kubelet[2901]: W1216 12:26:07.250675 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.250995 kubelet[2901]: E1216 12:26:07.250693 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.251249 kubelet[2901]: E1216 12:26:07.251230 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.251366 kubelet[2901]: W1216 12:26:07.251281 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.251366 kubelet[2901]: E1216 12:26:07.251308 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.251752 kubelet[2901]: E1216 12:26:07.251712 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.251752 kubelet[2901]: W1216 12:26:07.251727 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.251990 kubelet[2901]: E1216 12:26:07.251739 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.252224 kubelet[2901]: E1216 12:26:07.252207 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.252412 kubelet[2901]: W1216 12:26:07.252304 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.252412 kubelet[2901]: E1216 12:26:07.252324 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.252695 kubelet[2901]: E1216 12:26:07.252678 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.252897 kubelet[2901]: W1216 12:26:07.252761 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.252897 kubelet[2901]: E1216 12:26:07.252786 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.253161 kubelet[2901]: E1216 12:26:07.253118 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.253161 kubelet[2901]: W1216 12:26:07.253134 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.253161 kubelet[2901]: E1216 12:26:07.253146 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.253635 kubelet[2901]: E1216 12:26:07.253617 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.253794 kubelet[2901]: W1216 12:26:07.253717 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.253794 kubelet[2901]: E1216 12:26:07.253733 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.254468 kubelet[2901]: E1216 12:26:07.254447 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.254854 kubelet[2901]: W1216 12:26:07.254710 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.254854 kubelet[2901]: E1216 12:26:07.254733 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.255152 kubelet[2901]: E1216 12:26:07.255057 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.255152 kubelet[2901]: W1216 12:26:07.255104 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.255370 kubelet[2901]: E1216 12:26:07.255241 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.255684 kubelet[2901]: E1216 12:26:07.255666 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.255778 kubelet[2901]: W1216 12:26:07.255744 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.255778 kubelet[2901]: E1216 12:26:07.255764 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.256143 kubelet[2901]: E1216 12:26:07.256125 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.256309 kubelet[2901]: W1216 12:26:07.256217 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.256309 kubelet[2901]: E1216 12:26:07.256235 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.256722 kubelet[2901]: E1216 12:26:07.256671 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.256722 kubelet[2901]: W1216 12:26:07.256687 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.256722 kubelet[2901]: E1216 12:26:07.256699 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.257176 kubelet[2901]: E1216 12:26:07.257123 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.257176 kubelet[2901]: W1216 12:26:07.257142 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.257176 kubelet[2901]: E1216 12:26:07.257154 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.257703 kubelet[2901]: E1216 12:26:07.257686 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.257855 kubelet[2901]: W1216 12:26:07.257790 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.257855 kubelet[2901]: E1216 12:26:07.257810 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.258628 kubelet[2901]: E1216 12:26:07.258608 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.258941 kubelet[2901]: W1216 12:26:07.258704 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.258941 kubelet[2901]: E1216 12:26:07.258724 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.259216 kubelet[2901]: E1216 12:26:07.259200 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.259362 kubelet[2901]: W1216 12:26:07.259242 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.259362 kubelet[2901]: E1216 12:26:07.259257 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:07.259720 kubelet[2901]: E1216 12:26:07.259660 2901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:07.259720 kubelet[2901]: W1216 12:26:07.259674 2901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:07.259720 kubelet[2901]: E1216 12:26:07.259686 2901 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:07.269594 containerd[1706]: time="2025-12-16T12:26:07.269525941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:07.271343 containerd[1706]: time="2025-12-16T12:26:07.271256267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:07.273018 containerd[1706]: time="2025-12-16T12:26:07.272928672Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:07.276436 containerd[1706]: time="2025-12-16T12:26:07.276361963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:07.276930 containerd[1706]: time="2025-12-16T12:26:07.276853005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.307711016s" Dec 16 12:26:07.276930 containerd[1706]: time="2025-12-16T12:26:07.276889405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:26:07.281649 containerd[1706]: time="2025-12-16T12:26:07.281577140Z" level=info msg="CreateContainer within sandbox \"16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:26:07.292327 containerd[1706]: time="2025-12-16T12:26:07.291862013Z" level=info msg="Container 10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:07.302457 containerd[1706]: time="2025-12-16T12:26:07.302412207Z" level=info msg="CreateContainer within sandbox \"16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753\"" Dec 16 12:26:07.303201 containerd[1706]: time="2025-12-16T12:26:07.303177849Z" level=info msg="StartContainer for \"10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753\"" Dec 16 12:26:07.304804 containerd[1706]: time="2025-12-16T12:26:07.304775935Z" level=info msg="connecting to shim 10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753" address="unix:///run/containerd/s/ae085f8d0b1617ae6daaed001441064d84dad5f77785dc3b2834c22964da76da" protocol=ttrpc version=3 Dec 16 12:26:07.328538 systemd[1]: Started cri-containerd-10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753.scope - libcontainer container 
10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753. Dec 16 12:26:07.385000 audit: BPF prog-id=166 op=LOAD Dec 16 12:26:07.385000 audit[3636]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3412 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:07.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130313539616661303037383762373564376264396364376263333530 Dec 16 12:26:07.385000 audit: BPF prog-id=167 op=LOAD Dec 16 12:26:07.385000 audit[3636]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3412 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:07.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130313539616661303037383762373564376264396364376263333530 Dec 16 12:26:07.385000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:26:07.385000 audit[3636]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:07.385000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130313539616661303037383762373564376264396364376263333530 Dec 16 12:26:07.385000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:26:07.385000 audit[3636]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:07.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130313539616661303037383762373564376264396364376263333530 Dec 16 12:26:07.385000 audit: BPF prog-id=168 op=LOAD Dec 16 12:26:07.385000 audit[3636]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3412 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:07.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130313539616661303037383762373564376264396364376263333530 Dec 16 12:26:07.406207 containerd[1706]: time="2025-12-16T12:26:07.406151581Z" level=info msg="StartContainer for \"10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753\" returns successfully" Dec 16 12:26:07.418520 systemd[1]: cri-containerd-10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753.scope: Deactivated successfully. 
Dec 16 12:26:07.423189 containerd[1706]: time="2025-12-16T12:26:07.423151196Z" level=info msg="received container exit event container_id:\"10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753\" id:\"10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753\" pid:3648 exited_at:{seconds:1765887967 nanos:422626475}" Dec 16 12:26:07.424000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:26:07.443979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10159afa00787b75d7bd9cd7bc350d3d1cb565b78af2d2b750229f85d72fa753-rootfs.mount: Deactivated successfully. Dec 16 12:26:08.143456 containerd[1706]: time="2025-12-16T12:26:08.143415005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:26:08.160421 kubelet[2901]: I1216 12:26:08.160317 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-559c6f94f4-zntv4" podStartSLOduration=3.5247466039999997 podStartE2EDuration="6.16028682s" podCreationTimestamp="2025-12-16 12:26:02 +0000 UTC" firstStartedPulling="2025-12-16 12:26:03.333276292 +0000 UTC m=+23.366471445" lastFinishedPulling="2025-12-16 12:26:05.968816508 +0000 UTC m=+26.002011661" observedRunningTime="2025-12-16 12:26:06.152247819 +0000 UTC m=+26.185443012" watchObservedRunningTime="2025-12-16 12:26:08.16028682 +0000 UTC m=+28.193481973" Dec 16 12:26:09.056168 kubelet[2901]: E1216 12:26:09.056070 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:10.557456 containerd[1706]: time="2025-12-16T12:26:10.557405238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:10.558334 containerd[1706]: 
time="2025-12-16T12:26:10.558283121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:26:10.559637 containerd[1706]: time="2025-12-16T12:26:10.559540285Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:10.562874 containerd[1706]: time="2025-12-16T12:26:10.562601495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:10.563331 containerd[1706]: time="2025-12-16T12:26:10.563305577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.419831652s" Dec 16 12:26:10.563387 containerd[1706]: time="2025-12-16T12:26:10.563332897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:26:10.568884 containerd[1706]: time="2025-12-16T12:26:10.568824635Z" level=info msg="CreateContainer within sandbox \"16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:26:10.579440 containerd[1706]: time="2025-12-16T12:26:10.577999105Z" level=info msg="Container 85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:10.588140 containerd[1706]: time="2025-12-16T12:26:10.588069297Z" level=info msg="CreateContainer within sandbox 
\"16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3\"" Dec 16 12:26:10.588785 containerd[1706]: time="2025-12-16T12:26:10.588731299Z" level=info msg="StartContainer for \"85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3\"" Dec 16 12:26:10.590461 containerd[1706]: time="2025-12-16T12:26:10.590428345Z" level=info msg="connecting to shim 85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3" address="unix:///run/containerd/s/ae085f8d0b1617ae6daaed001441064d84dad5f77785dc3b2834c22964da76da" protocol=ttrpc version=3 Dec 16 12:26:10.609504 systemd[1]: Started cri-containerd-85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3.scope - libcontainer container 85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3. Dec 16 12:26:10.655000 audit: BPF prog-id=169 op=LOAD Dec 16 12:26:10.655000 audit[3696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3412 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:10.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383636643733633965343739656138313264353961316439353439 Dec 16 12:26:10.655000 audit: BPF prog-id=170 op=LOAD Dec 16 12:26:10.655000 audit[3696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3412 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:26:10.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383636643733633965343739656138313264353961316439353439 Dec 16 12:26:10.655000 audit: BPF prog-id=170 op=UNLOAD Dec 16 12:26:10.655000 audit[3696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:10.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383636643733633965343739656138313264353961316439353439 Dec 16 12:26:10.655000 audit: BPF prog-id=169 op=UNLOAD Dec 16 12:26:10.655000 audit[3696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:10.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383636643733633965343739656138313264353961316439353439 Dec 16 12:26:10.655000 audit: BPF prog-id=171 op=LOAD Dec 16 12:26:10.655000 audit[3696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3412 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:10.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383636643733633965343739656138313264353961316439353439 Dec 16 12:26:10.674246 containerd[1706]: time="2025-12-16T12:26:10.674206055Z" level=info msg="StartContainer for \"85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3\" returns successfully" Dec 16 12:26:11.055593 kubelet[2901]: E1216 12:26:11.055513 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:11.935947 containerd[1706]: time="2025-12-16T12:26:11.935894601Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:26:11.937916 systemd[1]: cri-containerd-85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3.scope: Deactivated successfully. Dec 16 12:26:11.938264 systemd[1]: cri-containerd-85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3.scope: Consumed 479ms CPU time, 188M memory peak, 165.9M written to disk. 
Dec 16 12:26:11.940451 containerd[1706]: time="2025-12-16T12:26:11.940410776Z" level=info msg="received container exit event container_id:\"85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3\" id:\"85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3\" pid:3709 exited_at:{seconds:1765887971 nanos:940152455}" Dec 16 12:26:11.942850 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 16 12:26:11.943007 kernel: audit: type=1334 audit(1765887971.941:575): prog-id=171 op=UNLOAD Dec 16 12:26:11.941000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:26:11.959339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85866d73c9e479ea812d59a1d954983c55873855d408534f33b7c1c7525907c3-rootfs.mount: Deactivated successfully. Dec 16 12:26:11.974216 kubelet[2901]: I1216 12:26:11.974186 2901 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 12:26:13.047159 systemd[1]: Created slice kubepods-besteffort-pod1da2d440_02fc_4a40_abf1_80ffcd9275c1.slice - libcontainer container kubepods-besteffort-pod1da2d440_02fc_4a40_abf1_80ffcd9275c1.slice. Dec 16 12:26:13.054792 systemd[1]: Created slice kubepods-besteffort-pod06792be6_fad3_4b79_a250_73afb10c06a6.slice - libcontainer container kubepods-besteffort-pod06792be6_fad3_4b79_a250_73afb10c06a6.slice. 
Dec 16 12:26:13.091616 kubelet[2901]: I1216 12:26:13.091365 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/06792be6-fad3-4b79-a250-73afb10c06a6-calico-apiserver-certs\") pod \"calico-apiserver-65dbdbb8c6-9lrp5\" (UID: \"06792be6-fad3-4b79-a250-73afb10c06a6\") " pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" Dec 16 12:26:13.091616 kubelet[2901]: I1216 12:26:13.091416 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4mc2\" (UniqueName: \"kubernetes.io/projected/06792be6-fad3-4b79-a250-73afb10c06a6-kube-api-access-k4mc2\") pod \"calico-apiserver-65dbdbb8c6-9lrp5\" (UID: \"06792be6-fad3-4b79-a250-73afb10c06a6\") " pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" Dec 16 12:26:13.097214 systemd[1]: Created slice kubepods-burstable-podd35c392d_b26a_4874_a153_e50017e8ee4f.slice - libcontainer container kubepods-burstable-podd35c392d_b26a_4874_a153_e50017e8ee4f.slice. 
Dec 16 12:26:13.192259 kubelet[2901]: I1216 12:26:13.192151 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d35c392d-b26a-4874-a153-e50017e8ee4f-config-volume\") pod \"coredns-66bc5c9577-dxpqs\" (UID: \"d35c392d-b26a-4874-a153-e50017e8ee4f\") " pod="kube-system/coredns-66bc5c9577-dxpqs" Dec 16 12:26:13.192259 kubelet[2901]: I1216 12:26:13.192251 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdl66\" (UniqueName: \"kubernetes.io/projected/d35c392d-b26a-4874-a153-e50017e8ee4f-kube-api-access-vdl66\") pod \"coredns-66bc5c9577-dxpqs\" (UID: \"d35c392d-b26a-4874-a153-e50017e8ee4f\") " pod="kube-system/coredns-66bc5c9577-dxpqs" Dec 16 12:26:13.399810 containerd[1706]: time="2025-12-16T12:26:13.399475159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8fpw,Uid:1da2d440-02fc-4a40-abf1-80ffcd9275c1,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:13.402883 containerd[1706]: time="2025-12-16T12:26:13.402633849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dbdbb8c6-9lrp5,Uid:06792be6-fad3-4b79-a250-73afb10c06a6,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:13.403402 systemd[1]: Created slice kubepods-burstable-podaaeeb48b_578f_4e35_a67a_4bb9f3df97da.slice - libcontainer container kubepods-burstable-podaaeeb48b_578f_4e35_a67a_4bb9f3df97da.slice. Dec 16 12:26:13.408930 containerd[1706]: time="2025-12-16T12:26:13.408863949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dxpqs,Uid:d35c392d-b26a-4874-a153-e50017e8ee4f,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:13.419166 systemd[1]: Created slice kubepods-besteffort-podf8c50491_6041_443b_aedf_13a9fee1a718.slice - libcontainer container kubepods-besteffort-podf8c50491_6041_443b_aedf_13a9fee1a718.slice. 
Dec 16 12:26:13.436990 systemd[1]: Created slice kubepods-besteffort-podea12a60c_4683_4d5e_8e8f_9b466a85a781.slice - libcontainer container kubepods-besteffort-podea12a60c_4683_4d5e_8e8f_9b466a85a781.slice. Dec 16 12:26:13.442964 systemd[1]: Created slice kubepods-besteffort-pod5b2b3263_cc70_4a4f_a835_4543e7a31ab8.slice - libcontainer container kubepods-besteffort-pod5b2b3263_cc70_4a4f_a835_4543e7a31ab8.slice. Dec 16 12:26:13.450835 systemd[1]: Created slice kubepods-besteffort-podad325ba6_769c_4666_b200_58acf598b30f.slice - libcontainer container kubepods-besteffort-podad325ba6_769c_4666_b200_58acf598b30f.slice. Dec 16 12:26:13.494246 kubelet[2901]: I1216 12:26:13.494192 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8c50491-6041-443b-aedf-13a9fee1a718-tigera-ca-bundle\") pod \"calico-kube-controllers-758f6dbc5-vnxvc\" (UID: \"f8c50491-6041-443b-aedf-13a9fee1a718\") " pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" Dec 16 12:26:13.494246 kubelet[2901]: I1216 12:26:13.494241 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5b2b3263-cc70-4a4f-a835-4543e7a31ab8-goldmane-key-pair\") pod \"goldmane-7c778bb748-h72ht\" (UID: \"5b2b3263-cc70-4a4f-a835-4543e7a31ab8\") " pod="calico-system/goldmane-7c778bb748-h72ht" Dec 16 12:26:13.494453 kubelet[2901]: I1216 12:26:13.494304 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lmcs\" (UniqueName: \"kubernetes.io/projected/ea12a60c-4683-4d5e-8e8f-9b466a85a781-kube-api-access-8lmcs\") pod \"calico-apiserver-65dbdbb8c6-62gh6\" (UID: \"ea12a60c-4683-4d5e-8e8f-9b466a85a781\") " pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" Dec 16 12:26:13.494453 kubelet[2901]: I1216 12:26:13.494329 2901 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98cvk\" (UniqueName: \"kubernetes.io/projected/5b2b3263-cc70-4a4f-a835-4543e7a31ab8-kube-api-access-98cvk\") pod \"goldmane-7c778bb748-h72ht\" (UID: \"5b2b3263-cc70-4a4f-a835-4543e7a31ab8\") " pod="calico-system/goldmane-7c778bb748-h72ht" Dec 16 12:26:13.494453 kubelet[2901]: I1216 12:26:13.494352 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad325ba6-769c-4666-b200-58acf598b30f-whisker-backend-key-pair\") pod \"whisker-c64ff87c5-sdxg7\" (UID: \"ad325ba6-769c-4666-b200-58acf598b30f\") " pod="calico-system/whisker-c64ff87c5-sdxg7" Dec 16 12:26:13.494453 kubelet[2901]: I1216 12:26:13.494378 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaeeb48b-578f-4e35-a67a-4bb9f3df97da-config-volume\") pod \"coredns-66bc5c9577-9ck8v\" (UID: \"aaeeb48b-578f-4e35-a67a-4bb9f3df97da\") " pod="kube-system/coredns-66bc5c9577-9ck8v" Dec 16 12:26:13.494453 kubelet[2901]: I1216 12:26:13.494397 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rww55\" (UniqueName: \"kubernetes.io/projected/f8c50491-6041-443b-aedf-13a9fee1a718-kube-api-access-rww55\") pod \"calico-kube-controllers-758f6dbc5-vnxvc\" (UID: \"f8c50491-6041-443b-aedf-13a9fee1a718\") " pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" Dec 16 12:26:13.494575 kubelet[2901]: I1216 12:26:13.494414 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxd9z\" (UniqueName: \"kubernetes.io/projected/aaeeb48b-578f-4e35-a67a-4bb9f3df97da-kube-api-access-fxd9z\") pod \"coredns-66bc5c9577-9ck8v\" (UID: \"aaeeb48b-578f-4e35-a67a-4bb9f3df97da\") " pod="kube-system/coredns-66bc5c9577-9ck8v" 
Dec 16 12:26:13.494575 kubelet[2901]: I1216 12:26:13.494428 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad325ba6-769c-4666-b200-58acf598b30f-whisker-ca-bundle\") pod \"whisker-c64ff87c5-sdxg7\" (UID: \"ad325ba6-769c-4666-b200-58acf598b30f\") " pod="calico-system/whisker-c64ff87c5-sdxg7" Dec 16 12:26:13.494575 kubelet[2901]: I1216 12:26:13.494445 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2b3263-cc70-4a4f-a835-4543e7a31ab8-config\") pod \"goldmane-7c778bb748-h72ht\" (UID: \"5b2b3263-cc70-4a4f-a835-4543e7a31ab8\") " pod="calico-system/goldmane-7c778bb748-h72ht" Dec 16 12:26:13.494575 kubelet[2901]: I1216 12:26:13.494460 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b2b3263-cc70-4a4f-a835-4543e7a31ab8-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-h72ht\" (UID: \"5b2b3263-cc70-4a4f-a835-4543e7a31ab8\") " pod="calico-system/goldmane-7c778bb748-h72ht" Dec 16 12:26:13.494575 kubelet[2901]: I1216 12:26:13.494477 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea12a60c-4683-4d5e-8e8f-9b466a85a781-calico-apiserver-certs\") pod \"calico-apiserver-65dbdbb8c6-62gh6\" (UID: \"ea12a60c-4683-4d5e-8e8f-9b466a85a781\") " pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" Dec 16 12:26:13.494747 kubelet[2901]: I1216 12:26:13.494494 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzkr\" (UniqueName: \"kubernetes.io/projected/ad325ba6-769c-4666-b200-58acf598b30f-kube-api-access-hgzkr\") pod \"whisker-c64ff87c5-sdxg7\" (UID: 
\"ad325ba6-769c-4666-b200-58acf598b30f\") " pod="calico-system/whisker-c64ff87c5-sdxg7" Dec 16 12:26:13.515805 containerd[1706]: time="2025-12-16T12:26:13.515756853Z" level=error msg="Failed to destroy network for sandbox \"b304a7e46ff35597efaf503820d780dede2ab618683846d66de12686e7b7dd68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.516351 containerd[1706]: time="2025-12-16T12:26:13.516130615Z" level=error msg="Failed to destroy network for sandbox \"c4982ef343dec2489c458dd8d2972c7f0ebe9a9087ce96d6edd6deee34492ccb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.518507 containerd[1706]: time="2025-12-16T12:26:13.518467542Z" level=error msg="Failed to destroy network for sandbox \"7cfdda9f717081ab70db17336a2c817ba80c54e6266881cd7a46d7e257a773ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.518787 containerd[1706]: time="2025-12-16T12:26:13.518750783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dbdbb8c6-9lrp5,Uid:06792be6-fad3-4b79-a250-73afb10c06a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b304a7e46ff35597efaf503820d780dede2ab618683846d66de12686e7b7dd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.519145 kubelet[2901]: E1216 12:26:13.519082 2901 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b304a7e46ff35597efaf503820d780dede2ab618683846d66de12686e7b7dd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.519213 kubelet[2901]: E1216 12:26:13.519172 2901 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b304a7e46ff35597efaf503820d780dede2ab618683846d66de12686e7b7dd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" Dec 16 12:26:13.519213 kubelet[2901]: E1216 12:26:13.519192 2901 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b304a7e46ff35597efaf503820d780dede2ab618683846d66de12686e7b7dd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" Dec 16 12:26:13.519276 kubelet[2901]: E1216 12:26:13.519253 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65dbdbb8c6-9lrp5_calico-apiserver(06792be6-fad3-4b79-a250-73afb10c06a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65dbdbb8c6-9lrp5_calico-apiserver(06792be6-fad3-4b79-a250-73afb10c06a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b304a7e46ff35597efaf503820d780dede2ab618683846d66de12686e7b7dd68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:26:13.521515 containerd[1706]: time="2025-12-16T12:26:13.521474512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8fpw,Uid:1da2d440-02fc-4a40-abf1-80ffcd9275c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4982ef343dec2489c458dd8d2972c7f0ebe9a9087ce96d6edd6deee34492ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.522002 kubelet[2901]: E1216 12:26:13.521942 2901 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4982ef343dec2489c458dd8d2972c7f0ebe9a9087ce96d6edd6deee34492ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.522073 kubelet[2901]: E1216 12:26:13.522018 2901 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4982ef343dec2489c458dd8d2972c7f0ebe9a9087ce96d6edd6deee34492ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k8fpw" Dec 16 12:26:13.522073 kubelet[2901]: E1216 12:26:13.522040 2901 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4982ef343dec2489c458dd8d2972c7f0ebe9a9087ce96d6edd6deee34492ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k8fpw" Dec 16 12:26:13.522132 kubelet[2901]: E1216 12:26:13.522107 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4982ef343dec2489c458dd8d2972c7f0ebe9a9087ce96d6edd6deee34492ccb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:13.524493 containerd[1706]: time="2025-12-16T12:26:13.524432561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dxpqs,Uid:d35c392d-b26a-4874-a153-e50017e8ee4f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfdda9f717081ab70db17336a2c817ba80c54e6266881cd7a46d7e257a773ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.524701 kubelet[2901]: E1216 12:26:13.524647 2901 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfdda9f717081ab70db17336a2c817ba80c54e6266881cd7a46d7e257a773ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.524752 kubelet[2901]: 
E1216 12:26:13.524712 2901 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfdda9f717081ab70db17336a2c817ba80c54e6266881cd7a46d7e257a773ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dxpqs" Dec 16 12:26:13.524752 kubelet[2901]: E1216 12:26:13.524738 2901 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfdda9f717081ab70db17336a2c817ba80c54e6266881cd7a46d7e257a773ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dxpqs" Dec 16 12:26:13.524946 kubelet[2901]: E1216 12:26:13.524917 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dxpqs_kube-system(d35c392d-b26a-4874-a153-e50017e8ee4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dxpqs_kube-system(d35c392d-b26a-4874-a153-e50017e8ee4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cfdda9f717081ab70db17336a2c817ba80c54e6266881cd7a46d7e257a773ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dxpqs" podUID="d35c392d-b26a-4874-a153-e50017e8ee4f" Dec 16 12:26:13.711488 containerd[1706]: time="2025-12-16T12:26:13.711370924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9ck8v,Uid:aaeeb48b-578f-4e35-a67a-4bb9f3df97da,Namespace:kube-system,Attempt:0,}" Dec 16 
12:26:13.730398 containerd[1706]: time="2025-12-16T12:26:13.730353505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f6dbc5-vnxvc,Uid:f8c50491-6041-443b-aedf-13a9fee1a718,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:13.745171 containerd[1706]: time="2025-12-16T12:26:13.744991672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dbdbb8c6-62gh6,Uid:ea12a60c-4683-4d5e-8e8f-9b466a85a781,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:13.751963 containerd[1706]: time="2025-12-16T12:26:13.751913295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h72ht,Uid:5b2b3263-cc70-4a4f-a835-4543e7a31ab8,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:13.758167 containerd[1706]: time="2025-12-16T12:26:13.758134475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c64ff87c5-sdxg7,Uid:ad325ba6-769c-4666-b200-58acf598b30f,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:13.772584 containerd[1706]: time="2025-12-16T12:26:13.772524481Z" level=error msg="Failed to destroy network for sandbox \"5f23f8400537db6b861961a7eca06f8df896861689846ab3fa4822e0d5377275\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.779228 containerd[1706]: time="2025-12-16T12:26:13.779159662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9ck8v,Uid:aaeeb48b-578f-4e35-a67a-4bb9f3df97da,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f23f8400537db6b861961a7eca06f8df896861689846ab3fa4822e0d5377275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.779451 kubelet[2901]: E1216 
12:26:13.779414 2901 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f23f8400537db6b861961a7eca06f8df896861689846ab3fa4822e0d5377275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.779502 kubelet[2901]: E1216 12:26:13.779470 2901 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f23f8400537db6b861961a7eca06f8df896861689846ab3fa4822e0d5377275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9ck8v" Dec 16 12:26:13.779502 kubelet[2901]: E1216 12:26:13.779489 2901 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f23f8400537db6b861961a7eca06f8df896861689846ab3fa4822e0d5377275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9ck8v" Dec 16 12:26:13.779666 kubelet[2901]: E1216 12:26:13.779541 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9ck8v_kube-system(aaeeb48b-578f-4e35-a67a-4bb9f3df97da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9ck8v_kube-system(aaeeb48b-578f-4e35-a67a-4bb9f3df97da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f23f8400537db6b861961a7eca06f8df896861689846ab3fa4822e0d5377275\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9ck8v" podUID="aaeeb48b-578f-4e35-a67a-4bb9f3df97da" Dec 16 12:26:13.812039 containerd[1706]: time="2025-12-16T12:26:13.811971648Z" level=error msg="Failed to destroy network for sandbox \"8f5141aae12e50108063438344f6561e1e598ecf65db54083c2f0a92a068f3fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.816042 containerd[1706]: time="2025-12-16T12:26:13.815984381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f6dbc5-vnxvc,Uid:f8c50491-6041-443b-aedf-13a9fee1a718,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5141aae12e50108063438344f6561e1e598ecf65db54083c2f0a92a068f3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.816478 kubelet[2901]: E1216 12:26:13.816440 2901 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5141aae12e50108063438344f6561e1e598ecf65db54083c2f0a92a068f3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.816546 kubelet[2901]: E1216 12:26:13.816503 2901 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5141aae12e50108063438344f6561e1e598ecf65db54083c2f0a92a068f3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" Dec 16 12:26:13.816546 kubelet[2901]: E1216 12:26:13.816537 2901 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5141aae12e50108063438344f6561e1e598ecf65db54083c2f0a92a068f3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" Dec 16 12:26:13.816716 kubelet[2901]: E1216 12:26:13.816687 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-758f6dbc5-vnxvc_calico-system(f8c50491-6041-443b-aedf-13a9fee1a718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-758f6dbc5-vnxvc_calico-system(f8c50491-6041-443b-aedf-13a9fee1a718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f5141aae12e50108063438344f6561e1e598ecf65db54083c2f0a92a068f3fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:26:13.828557 containerd[1706]: time="2025-12-16T12:26:13.828440101Z" level=error msg="Failed to destroy network for sandbox \"29cc18071b28272611e98e0ec83f7dd5afcec3f63d8ccf38b47f3b6e83fb396b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.831406 containerd[1706]: time="2025-12-16T12:26:13.831369591Z" level=error msg="Failed to destroy network 
for sandbox \"f9cccce3eb283fa8e4d20dcc919debcd2f626136eada9a9d75b7cc1338f8e209\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.832535 containerd[1706]: time="2025-12-16T12:26:13.832499754Z" level=error msg="Failed to destroy network for sandbox \"067d67a48a85a068d269b95fe464b371a8389bb6ae197f06741fad2528301cd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.834150 containerd[1706]: time="2025-12-16T12:26:13.834119600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c64ff87c5-sdxg7,Uid:ad325ba6-769c-4666-b200-58acf598b30f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29cc18071b28272611e98e0ec83f7dd5afcec3f63d8ccf38b47f3b6e83fb396b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.834382 kubelet[2901]: E1216 12:26:13.834347 2901 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29cc18071b28272611e98e0ec83f7dd5afcec3f63d8ccf38b47f3b6e83fb396b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.834429 kubelet[2901]: E1216 12:26:13.834403 2901 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29cc18071b28272611e98e0ec83f7dd5afcec3f63d8ccf38b47f3b6e83fb396b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c64ff87c5-sdxg7" Dec 16 12:26:13.834429 kubelet[2901]: E1216 12:26:13.834420 2901 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29cc18071b28272611e98e0ec83f7dd5afcec3f63d8ccf38b47f3b6e83fb396b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c64ff87c5-sdxg7" Dec 16 12:26:13.834489 kubelet[2901]: E1216 12:26:13.834468 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c64ff87c5-sdxg7_calico-system(ad325ba6-769c-4666-b200-58acf598b30f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c64ff87c5-sdxg7_calico-system(ad325ba6-769c-4666-b200-58acf598b30f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29cc18071b28272611e98e0ec83f7dd5afcec3f63d8ccf38b47f3b6e83fb396b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c64ff87c5-sdxg7" podUID="ad325ba6-769c-4666-b200-58acf598b30f" Dec 16 12:26:13.836206 containerd[1706]: time="2025-12-16T12:26:13.836165006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dbdbb8c6-62gh6,Uid:ea12a60c-4683-4d5e-8e8f-9b466a85a781,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cccce3eb283fa8e4d20dcc919debcd2f626136eada9a9d75b7cc1338f8e209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.836478 kubelet[2901]: E1216 12:26:13.836442 2901 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cccce3eb283fa8e4d20dcc919debcd2f626136eada9a9d75b7cc1338f8e209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.836674 kubelet[2901]: E1216 12:26:13.836508 2901 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cccce3eb283fa8e4d20dcc919debcd2f626136eada9a9d75b7cc1338f8e209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" Dec 16 12:26:13.836674 kubelet[2901]: E1216 12:26:13.836526 2901 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cccce3eb283fa8e4d20dcc919debcd2f626136eada9a9d75b7cc1338f8e209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" Dec 16 12:26:13.836674 kubelet[2901]: E1216 12:26:13.836602 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65dbdbb8c6-62gh6_calico-apiserver(ea12a60c-4683-4d5e-8e8f-9b466a85a781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65dbdbb8c6-62gh6_calico-apiserver(ea12a60c-4683-4d5e-8e8f-9b466a85a781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f9cccce3eb283fa8e4d20dcc919debcd2f626136eada9a9d75b7cc1338f8e209\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:26:13.838807 containerd[1706]: time="2025-12-16T12:26:13.838752014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h72ht,Uid:5b2b3263-cc70-4a4f-a835-4543e7a31ab8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"067d67a48a85a068d269b95fe464b371a8389bb6ae197f06741fad2528301cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.838961 kubelet[2901]: E1216 12:26:13.838904 2901 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"067d67a48a85a068d269b95fe464b371a8389bb6ae197f06741fad2528301cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:13.838961 kubelet[2901]: E1216 12:26:13.838937 2901 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"067d67a48a85a068d269b95fe464b371a8389bb6ae197f06741fad2528301cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h72ht" Dec 16 12:26:13.838961 kubelet[2901]: E1216 12:26:13.838955 2901 kuberuntime_manager.go:1343] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"067d67a48a85a068d269b95fe464b371a8389bb6ae197f06741fad2528301cd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h72ht" Dec 16 12:26:13.839098 kubelet[2901]: E1216 12:26:13.838996 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-h72ht_calico-system(5b2b3263-cc70-4a4f-a835-4543e7a31ab8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-h72ht_calico-system(5b2b3263-cc70-4a4f-a835-4543e7a31ab8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"067d67a48a85a068d269b95fe464b371a8389bb6ae197f06741fad2528301cd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:26:14.162210 containerd[1706]: time="2025-12-16T12:26:14.162162937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:26:14.203603 systemd[1]: run-netns-cni\x2dc1bd8687\x2d36bf\x2d4e8a\x2d7d99\x2d95dcf618d6cc.mount: Deactivated successfully. Dec 16 12:26:14.204047 systemd[1]: run-netns-cni\x2db8a9bf26\x2dc518\x2d5ef5\x2ddb80\x2d35076ea9f659.mount: Deactivated successfully. Dec 16 12:26:18.551940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846736710.mount: Deactivated successfully. 
Dec 16 12:26:18.579368 containerd[1706]: time="2025-12-16T12:26:18.579038133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:18.580419 containerd[1706]: time="2025-12-16T12:26:18.580361057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:26:18.581907 containerd[1706]: time="2025-12-16T12:26:18.581866622Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:18.585646 containerd[1706]: time="2025-12-16T12:26:18.585566234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:18.586404 containerd[1706]: time="2025-12-16T12:26:18.586378397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.423974019s" Dec 16 12:26:18.586439 containerd[1706]: time="2025-12-16T12:26:18.586414517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:26:18.597988 containerd[1706]: time="2025-12-16T12:26:18.597862954Z" level=info msg="CreateContainer within sandbox \"16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:26:18.609328 containerd[1706]: time="2025-12-16T12:26:18.608857189Z" level=info msg="Container 
2e22a98ce39c6acbe6e1bf02862ca97616fc0b571d7e20712bb04ed7deb0a085: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:18.620545 containerd[1706]: time="2025-12-16T12:26:18.620466787Z" level=info msg="CreateContainer within sandbox \"16dfaf2205c740cebde65b20b68e648078e5fe54238a9ae3b53a977d31ce0980\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2e22a98ce39c6acbe6e1bf02862ca97616fc0b571d7e20712bb04ed7deb0a085\"" Dec 16 12:26:18.621614 containerd[1706]: time="2025-12-16T12:26:18.621558070Z" level=info msg="StartContainer for \"2e22a98ce39c6acbe6e1bf02862ca97616fc0b571d7e20712bb04ed7deb0a085\"" Dec 16 12:26:18.623479 containerd[1706]: time="2025-12-16T12:26:18.623447236Z" level=info msg="connecting to shim 2e22a98ce39c6acbe6e1bf02862ca97616fc0b571d7e20712bb04ed7deb0a085" address="unix:///run/containerd/s/ae085f8d0b1617ae6daaed001441064d84dad5f77785dc3b2834c22964da76da" protocol=ttrpc version=3 Dec 16 12:26:18.647566 systemd[1]: Started cri-containerd-2e22a98ce39c6acbe6e1bf02862ca97616fc0b571d7e20712bb04ed7deb0a085.scope - libcontainer container 2e22a98ce39c6acbe6e1bf02862ca97616fc0b571d7e20712bb04ed7deb0a085. 
Dec 16 12:26:18.712000 audit: BPF prog-id=172 op=LOAD Dec 16 12:26:18.712000 audit[4021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3412 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:18.716395 kernel: audit: type=1334 audit(1765887978.712:576): prog-id=172 op=LOAD Dec 16 12:26:18.716459 kernel: audit: type=1300 audit(1765887978.712:576): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3412 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:18.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323261393863653339633661636265366531626630323836326361 Dec 16 12:26:18.719552 kernel: audit: type=1327 audit(1765887978.712:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323261393863653339633661636265366531626630323836326361 Dec 16 12:26:18.712000 audit: BPF prog-id=173 op=LOAD Dec 16 12:26:18.720358 kernel: audit: type=1334 audit(1765887978.712:577): prog-id=173 op=LOAD Dec 16 12:26:18.720392 kernel: audit: type=1300 audit(1765887978.712:577): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3412 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:26:18.712000 audit[4021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3412 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:18.723924 kernel: audit: type=1327 audit(1765887978.712:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323261393863653339633661636265366531626630323836326361 Dec 16 12:26:18.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323261393863653339633661636265366531626630323836326361 Dec 16 12:26:18.712000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:26:18.727586 kernel: audit: type=1334 audit(1765887978.712:578): prog-id=173 op=UNLOAD Dec 16 12:26:18.712000 audit[4021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:18.731180 kernel: audit: type=1300 audit(1765887978.712:578): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:18.731252 kernel: audit: type=1327 audit(1765887978.712:578): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323261393863653339633661636265366531626630323836326361 Dec 16 12:26:18.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323261393863653339633661636265366531626630323836326361 Dec 16 12:26:18.712000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:26:18.735313 kernel: audit: type=1334 audit(1765887978.712:579): prog-id=172 op=UNLOAD Dec 16 12:26:18.712000 audit[4021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3412 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:18.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323261393863653339633661636265366531626630323836326361 Dec 16 12:26:18.712000 audit: BPF prog-id=174 op=LOAD Dec 16 12:26:18.712000 audit[4021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3412 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:18.712000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265323261393863653339633661636265366531626630323836326361 Dec 16 12:26:18.751987 containerd[1706]: time="2025-12-16T12:26:18.751877370Z" level=info msg="StartContainer for \"2e22a98ce39c6acbe6e1bf02862ca97616fc0b571d7e20712bb04ed7deb0a085\" returns successfully" Dec 16 12:26:18.892363 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:26:18.892481 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 12:26:19.132667 kubelet[2901]: I1216 12:26:19.132589 2901 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad325ba6-769c-4666-b200-58acf598b30f-whisker-backend-key-pair\") pod \"ad325ba6-769c-4666-b200-58acf598b30f\" (UID: \"ad325ba6-769c-4666-b200-58acf598b30f\") " Dec 16 12:26:19.132667 kubelet[2901]: I1216 12:26:19.132656 2901 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad325ba6-769c-4666-b200-58acf598b30f-whisker-ca-bundle\") pod \"ad325ba6-769c-4666-b200-58acf598b30f\" (UID: \"ad325ba6-769c-4666-b200-58acf598b30f\") " Dec 16 12:26:19.132667 kubelet[2901]: I1216 12:26:19.132676 2901 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgzkr\" (UniqueName: \"kubernetes.io/projected/ad325ba6-769c-4666-b200-58acf598b30f-kube-api-access-hgzkr\") pod \"ad325ba6-769c-4666-b200-58acf598b30f\" (UID: \"ad325ba6-769c-4666-b200-58acf598b30f\") " Dec 16 12:26:19.133426 kubelet[2901]: I1216 12:26:19.133383 2901 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad325ba6-769c-4666-b200-58acf598b30f-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "ad325ba6-769c-4666-b200-58acf598b30f" (UID: "ad325ba6-769c-4666-b200-58acf598b30f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:26:19.136392 kubelet[2901]: I1216 12:26:19.136343 2901 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad325ba6-769c-4666-b200-58acf598b30f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ad325ba6-769c-4666-b200-58acf598b30f" (UID: "ad325ba6-769c-4666-b200-58acf598b30f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:26:19.137017 kubelet[2901]: I1216 12:26:19.136779 2901 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad325ba6-769c-4666-b200-58acf598b30f-kube-api-access-hgzkr" (OuterVolumeSpecName: "kube-api-access-hgzkr") pod "ad325ba6-769c-4666-b200-58acf598b30f" (UID: "ad325ba6-769c-4666-b200-58acf598b30f"). InnerVolumeSpecName "kube-api-access-hgzkr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:26:19.182359 systemd[1]: Removed slice kubepods-besteffort-podad325ba6_769c_4666_b200_58acf598b30f.slice - libcontainer container kubepods-besteffort-podad325ba6_769c_4666_b200_58acf598b30f.slice. 
Dec 16 12:26:19.192511 kubelet[2901]: I1216 12:26:19.192432 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5jndz" podStartSLOduration=1.228713578 podStartE2EDuration="16.19241483s" podCreationTimestamp="2025-12-16 12:26:03 +0000 UTC" firstStartedPulling="2025-12-16 12:26:03.623370187 +0000 UTC m=+23.656565340" lastFinishedPulling="2025-12-16 12:26:18.587071439 +0000 UTC m=+38.620266592" observedRunningTime="2025-12-16 12:26:19.191973829 +0000 UTC m=+39.225169022" watchObservedRunningTime="2025-12-16 12:26:19.19241483 +0000 UTC m=+39.225609983" Dec 16 12:26:19.234319 kubelet[2901]: I1216 12:26:19.233898 2901 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad325ba6-769c-4666-b200-58acf598b30f-whisker-backend-key-pair\") on node \"ci-4515-1-0-7-179ea8c226\" DevicePath \"\"" Dec 16 12:26:19.234319 kubelet[2901]: I1216 12:26:19.233934 2901 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad325ba6-769c-4666-b200-58acf598b30f-whisker-ca-bundle\") on node \"ci-4515-1-0-7-179ea8c226\" DevicePath \"\"" Dec 16 12:26:19.234319 kubelet[2901]: I1216 12:26:19.233947 2901 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgzkr\" (UniqueName: \"kubernetes.io/projected/ad325ba6-769c-4666-b200-58acf598b30f-kube-api-access-hgzkr\") on node \"ci-4515-1-0-7-179ea8c226\" DevicePath \"\"" Dec 16 12:26:19.247284 systemd[1]: Created slice kubepods-besteffort-pod392ff8c8_2fc8_4f21_b0b8_6c2ae06ddf5e.slice - libcontainer container kubepods-besteffort-pod392ff8c8_2fc8_4f21_b0b8_6c2ae06ddf5e.slice. 
Dec 16 12:26:19.335420 kubelet[2901]: I1216 12:26:19.335358 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e-whisker-ca-bundle\") pod \"whisker-6c89f55df9-rhjdd\" (UID: \"392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e\") " pod="calico-system/whisker-6c89f55df9-rhjdd" Dec 16 12:26:19.335420 kubelet[2901]: I1216 12:26:19.335405 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkcp\" (UniqueName: \"kubernetes.io/projected/392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e-kube-api-access-9vkcp\") pod \"whisker-6c89f55df9-rhjdd\" (UID: \"392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e\") " pod="calico-system/whisker-6c89f55df9-rhjdd" Dec 16 12:26:19.335420 kubelet[2901]: I1216 12:26:19.335451 2901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e-whisker-backend-key-pair\") pod \"whisker-6c89f55df9-rhjdd\" (UID: \"392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e\") " pod="calico-system/whisker-6c89f55df9-rhjdd" Dec 16 12:26:19.554342 containerd[1706]: time="2025-12-16T12:26:19.554304397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c89f55df9-rhjdd,Uid:392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:19.554821 systemd[1]: var-lib-kubelet-pods-ad325ba6\x2d769c\x2d4666\x2db200\x2d58acf598b30f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhgzkr.mount: Deactivated successfully. Dec 16 12:26:19.555131 systemd[1]: var-lib-kubelet-pods-ad325ba6\x2d769c\x2d4666\x2db200\x2d58acf598b30f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 12:26:19.687476 systemd-networkd[1602]: cali9da69ae1fc0: Link UP Dec 16 12:26:19.687661 systemd-networkd[1602]: cali9da69ae1fc0: Gained carrier Dec 16 12:26:19.700965 containerd[1706]: 2025-12-16 12:26:19.578 [INFO][4086] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:19.700965 containerd[1706]: 2025-12-16 12:26:19.598 [INFO][4086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0 whisker-6c89f55df9- calico-system 392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e 870 0 2025-12-16 12:26:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c89f55df9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515-1-0-7-179ea8c226 whisker-6c89f55df9-rhjdd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9da69ae1fc0 [] [] }} ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Namespace="calico-system" Pod="whisker-6c89f55df9-rhjdd" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-" Dec 16 12:26:19.700965 containerd[1706]: 2025-12-16 12:26:19.598 [INFO][4086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Namespace="calico-system" Pod="whisker-6c89f55df9-rhjdd" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" Dec 16 12:26:19.700965 containerd[1706]: 2025-12-16 12:26:19.641 [INFO][4101] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" HandleID="k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Workload="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" Dec 16 
12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.641 [INFO][4101] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" HandleID="k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Workload="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-7-179ea8c226", "pod":"whisker-6c89f55df9-rhjdd", "timestamp":"2025-12-16 12:26:19.641493718 +0000 UTC"}, Hostname:"ci-4515-1-0-7-179ea8c226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.641 [INFO][4101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.641 [INFO][4101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.641 [INFO][4101] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-7-179ea8c226' Dec 16 12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.652 [INFO][4101] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.658 [INFO][4101] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.662 [INFO][4101] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.665 [INFO][4101] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701622 containerd[1706]: 2025-12-16 12:26:19.667 [INFO][4101] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701933 containerd[1706]: 2025-12-16 12:26:19.667 [INFO][4101] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701933 containerd[1706]: 2025-12-16 12:26:19.669 [INFO][4101] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef Dec 16 12:26:19.701933 containerd[1706]: 2025-12-16 12:26:19.673 [INFO][4101] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701933 containerd[1706]: 2025-12-16 12:26:19.679 [INFO][4101] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.65/26] block=192.168.122.64/26 handle="k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701933 containerd[1706]: 2025-12-16 12:26:19.679 [INFO][4101] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.65/26] handle="k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:19.701933 containerd[1706]: 2025-12-16 12:26:19.679 [INFO][4101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:19.701933 containerd[1706]: 2025-12-16 12:26:19.679 [INFO][4101] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.65/26] IPv6=[] ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" HandleID="k8s-pod-network.007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Workload="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" Dec 16 12:26:19.702060 containerd[1706]: 2025-12-16 12:26:19.681 [INFO][4086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Namespace="calico-system" Pod="whisker-6c89f55df9-rhjdd" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0", GenerateName:"whisker-6c89f55df9-", Namespace:"calico-system", SelfLink:"", UID:"392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c89f55df9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"", Pod:"whisker-6c89f55df9-rhjdd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9da69ae1fc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:19.702060 containerd[1706]: 2025-12-16 12:26:19.682 [INFO][4086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.65/32] ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Namespace="calico-system" Pod="whisker-6c89f55df9-rhjdd" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" Dec 16 12:26:19.702129 containerd[1706]: 2025-12-16 12:26:19.682 [INFO][4086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9da69ae1fc0 ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Namespace="calico-system" Pod="whisker-6c89f55df9-rhjdd" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" Dec 16 12:26:19.702129 containerd[1706]: 2025-12-16 12:26:19.687 [INFO][4086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Namespace="calico-system" Pod="whisker-6c89f55df9-rhjdd" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" Dec 16 12:26:19.702168 containerd[1706]: 2025-12-16 12:26:19.688 [INFO][4086] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Namespace="calico-system" Pod="whisker-6c89f55df9-rhjdd" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0", GenerateName:"whisker-6c89f55df9-", Namespace:"calico-system", SelfLink:"", UID:"392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c89f55df9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef", Pod:"whisker-6c89f55df9-rhjdd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9da69ae1fc0", MAC:"1a:44:3e:3f:15:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:19.702214 containerd[1706]: 2025-12-16 12:26:19.696 [INFO][4086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" Namespace="calico-system" Pod="whisker-6c89f55df9-rhjdd" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-whisker--6c89f55df9--rhjdd-eth0" Dec 16 12:26:19.726864 containerd[1706]: time="2025-12-16T12:26:19.726746512Z" level=info msg="connecting to shim 007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef" address="unix:///run/containerd/s/168bbba135a7bd27ab3f4c647e67b908c960b47b90646a0570994f785d3cf8e3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:19.749500 systemd[1]: Started cri-containerd-007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef.scope - libcontainer container 007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef. Dec 16 12:26:19.759000 audit: BPF prog-id=175 op=LOAD Dec 16 12:26:19.759000 audit: BPF prog-id=176 op=LOAD Dec 16 12:26:19.759000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4126 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030376465623431666435326339656632346631613735623732313931 Dec 16 12:26:19.759000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:26:19.759000 audit[4137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4126 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.759000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030376465623431666435326339656632346631613735623732313931 Dec 16 12:26:19.760000 audit: BPF prog-id=177 op=LOAD Dec 16 12:26:19.760000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4126 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030376465623431666435326339656632346631613735623732313931 Dec 16 12:26:19.760000 audit: BPF prog-id=178 op=LOAD Dec 16 12:26:19.760000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4126 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030376465623431666435326339656632346631613735623732313931 Dec 16 12:26:19.760000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:26:19.760000 audit[4137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4126 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:26:19.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030376465623431666435326339656632346631613735623732313931 Dec 16 12:26:19.760000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:26:19.760000 audit[4137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4126 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030376465623431666435326339656632346631613735623732313931 Dec 16 12:26:19.761000 audit: BPF prog-id=179 op=LOAD Dec 16 12:26:19.761000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4126 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030376465623431666435326339656632346631613735623732313931 Dec 16 12:26:19.782434 containerd[1706]: time="2025-12-16T12:26:19.782391172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c89f55df9-rhjdd,Uid:392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"007deb41fd52c9ef24f1a75b7219167d8b63463a392d440a82d29e486fb1edef\"" Dec 16 12:26:19.784613 containerd[1706]: time="2025-12-16T12:26:19.784572899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:26:20.059221 kubelet[2901]: I1216 12:26:20.059167 2901 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad325ba6-769c-4666-b200-58acf598b30f" path="/var/lib/kubelet/pods/ad325ba6-769c-4666-b200-58acf598b30f/volumes" Dec 16 12:26:20.131639 containerd[1706]: time="2025-12-16T12:26:20.131527817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:20.133186 containerd[1706]: time="2025-12-16T12:26:20.133116542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:26:20.133265 containerd[1706]: time="2025-12-16T12:26:20.133212782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:20.133488 kubelet[2901]: E1216 12:26:20.133428 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:26:20.133488 kubelet[2901]: E1216 12:26:20.133479 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:26:20.133826 kubelet[2901]: E1216 12:26:20.133559 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start 
failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:20.135662 containerd[1706]: time="2025-12-16T12:26:20.135621110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:26:20.460494 containerd[1706]: time="2025-12-16T12:26:20.460367357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:20.461951 containerd[1706]: time="2025-12-16T12:26:20.461879762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:26:20.462053 containerd[1706]: time="2025-12-16T12:26:20.461982162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:20.462214 kubelet[2901]: E1216 12:26:20.462175 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:26:20.462256 kubelet[2901]: E1216 12:26:20.462223 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:26:20.462343 kubelet[2901]: E1216 
12:26:20.462325 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:20.462393 kubelet[2901]: E1216 12:26:20.462367 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:26:21.187510 kubelet[2901]: E1216 12:26:21.187417 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:26:21.219000 audit[4270]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:21.219000 audit[4270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe27f0490 a2=0 a3=1 items=0 ppid=3030 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:21.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:21.228000 audit[4270]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:21.228000 audit[4270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe27f0490 a2=0 a3=1 items=0 ppid=3030 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:21.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:21.374568 systemd-networkd[1602]: cali9da69ae1fc0: Gained IPv6LL Dec 16 12:26:25.060755 containerd[1706]: time="2025-12-16T12:26:25.060659904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9ck8v,Uid:aaeeb48b-578f-4e35-a67a-4bb9f3df97da,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:25.063743 containerd[1706]: time="2025-12-16T12:26:25.063710554Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-k8fpw,Uid:1da2d440-02fc-4a40-abf1-80ffcd9275c1,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:25.172072 systemd-networkd[1602]: cali92323502689: Link UP Dec 16 12:26:25.172547 systemd-networkd[1602]: cali92323502689: Gained carrier Dec 16 12:26:25.184680 containerd[1706]: 2025-12-16 12:26:25.090 [INFO][4364] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:25.184680 containerd[1706]: 2025-12-16 12:26:25.105 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0 coredns-66bc5c9577- kube-system aaeeb48b-578f-4e35-a67a-4bb9f3df97da 806 0 2025-12-16 12:25:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-7-179ea8c226 coredns-66bc5c9577-9ck8v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali92323502689 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Namespace="kube-system" Pod="coredns-66bc5c9577-9ck8v" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-" Dec 16 12:26:25.184680 containerd[1706]: 2025-12-16 12:26:25.105 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Namespace="kube-system" Pod="coredns-66bc5c9577-9ck8v" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" Dec 16 12:26:25.184680 containerd[1706]: 2025-12-16 12:26:25.129 [INFO][4391] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" HandleID="k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Workload="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.129 [INFO][4391] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" HandleID="k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Workload="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-7-179ea8c226", "pod":"coredns-66bc5c9577-9ck8v", "timestamp":"2025-12-16 12:26:25.129414446 +0000 UTC"}, Hostname:"ci-4515-1-0-7-179ea8c226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.129 [INFO][4391] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.129 [INFO][4391] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
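The audit `PROCTITLE` records in this log store the process command line as a hex string with NUL bytes separating the arguments (auditd does this whenever the title contains non-printable characters). A minimal decoder, shown here against the `iptables-restore` record above (the `runc` records decode the same way, though their payloads are truncated in this capture):

```python
def decode_proctitle(hex_payload: str) -> list[str]:
    # auditd encodes argv as raw hex bytes; NUL bytes separate the arguments.
    raw = bytes.fromhex(hex_payload)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00")]

# The iptables-restore PROCTITLE payload from the log above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
))
# → ['iptables-restore', '-w', '5', '--noflush', '--counters']
```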
Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.129 [INFO][4391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-7-179ea8c226' Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.139 [INFO][4391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.144 [INFO][4391] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.149 [INFO][4391] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.151 [INFO][4391] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.184909 containerd[1706]: 2025-12-16 12:26:25.153 [INFO][4391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.185086 containerd[1706]: 2025-12-16 12:26:25.153 [INFO][4391] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.185086 containerd[1706]: 2025-12-16 12:26:25.155 [INFO][4391] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6 Dec 16 12:26:25.185086 containerd[1706]: 2025-12-16 12:26:25.160 [INFO][4391] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.185086 containerd[1706]: 2025-12-16 12:26:25.165 [INFO][4391] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.66/26] block=192.168.122.64/26 handle="k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.185086 containerd[1706]: 2025-12-16 12:26:25.165 [INFO][4391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.66/26] handle="k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.185086 containerd[1706]: 2025-12-16 12:26:25.165 [INFO][4391] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:25.185086 containerd[1706]: 2025-12-16 12:26:25.165 [INFO][4391] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.66/26] IPv6=[] ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" HandleID="k8s-pod-network.87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Workload="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" Dec 16 12:26:25.185221 containerd[1706]: 2025-12-16 12:26:25.168 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Namespace="kube-system" Pod="coredns-66bc5c9577-9ck8v" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"aaeeb48b-578f-4e35-a67a-4bb9f3df97da", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"", Pod:"coredns-66bc5c9577-9ck8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92323502689", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:25.185221 containerd[1706]: 2025-12-16 12:26:25.169 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.66/32] ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Namespace="kube-system" Pod="coredns-66bc5c9577-9ck8v" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" Dec 16 12:26:25.185221 containerd[1706]: 2025-12-16 12:26:25.169 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92323502689 
ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Namespace="kube-system" Pod="coredns-66bc5c9577-9ck8v" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" Dec 16 12:26:25.185221 containerd[1706]: 2025-12-16 12:26:25.173 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Namespace="kube-system" Pod="coredns-66bc5c9577-9ck8v" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" Dec 16 12:26:25.185221 containerd[1706]: 2025-12-16 12:26:25.173 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Namespace="kube-system" Pod="coredns-66bc5c9577-9ck8v" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"aaeeb48b-578f-4e35-a67a-4bb9f3df97da", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", 
ContainerID:"87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6", Pod:"coredns-66bc5c9577-9ck8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92323502689", MAC:"6e:2e:31:bc:c4:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:25.185528 containerd[1706]: 2025-12-16 12:26:25.183 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" Namespace="kube-system" Pod="coredns-66bc5c9577-9ck8v" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--9ck8v-eth0" Dec 16 12:26:25.207494 containerd[1706]: time="2025-12-16T12:26:25.207452178Z" level=info msg="connecting to shim 87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6" address="unix:///run/containerd/s/c9e19d0b9b546d8a7b78fba9d00a975c1764481a271e790c4dbe473630657629" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:25.234737 systemd[1]: Started 
cri-containerd-87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6.scope - libcontainer container 87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6. Dec 16 12:26:25.251057 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 16 12:26:25.251219 kernel: audit: type=1334 audit(1765887985.249:591): prog-id=180 op=LOAD Dec 16 12:26:25.249000 audit: BPF prog-id=180 op=LOAD Dec 16 12:26:25.251000 audit: BPF prog-id=181 op=LOAD Dec 16 12:26:25.252891 kernel: audit: type=1334 audit(1765887985.251:592): prog-id=181 op=LOAD Dec 16 12:26:25.251000 audit[4438]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.256598 kernel: audit: type=1300 audit(1765887985.251:592): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.260150 kernel: audit: type=1327 audit(1765887985.251:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.251000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:26:25.261326 kernel: audit: type=1334 
audit(1765887985.251:593): prog-id=181 op=UNLOAD Dec 16 12:26:25.251000 audit[4438]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.265391 kernel: audit: type=1300 audit(1765887985.251:593): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.268777 kernel: audit: type=1327 audit(1765887985.251:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.251000 audit: BPF prog-id=182 op=LOAD Dec 16 12:26:25.269803 kernel: audit: type=1334 audit(1765887985.251:594): prog-id=182 op=LOAD Dec 16 12:26:25.251000 audit[4438]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.273750 kernel: audit: type=1300 audit(1765887985.251:594): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4427 
pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.277122 kernel: audit: type=1327 audit(1765887985.251:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.251000 audit: BPF prog-id=183 op=LOAD Dec 16 12:26:25.251000 audit[4438]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.251000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:26:25.251000 audit[4438]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.251000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.251000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:26:25.251000 audit[4438]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.251000 audit: BPF prog-id=184 op=LOAD Dec 16 12:26:25.251000 audit[4438]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4427 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837626634366334633463636433613030373464303566333533353264 Dec 16 12:26:25.288203 systemd-networkd[1602]: cali09d530ffc04: Link UP Dec 16 12:26:25.289068 systemd-networkd[1602]: cali09d530ffc04: Gained carrier Dec 16 12:26:25.292999 containerd[1706]: time="2025-12-16T12:26:25.292960973Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-9ck8v,Uid:aaeeb48b-578f-4e35-a67a-4bb9f3df97da,Namespace:kube-system,Attempt:0,} returns sandbox id \"87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6\"" Dec 16 12:26:25.304347 containerd[1706]: time="2025-12-16T12:26:25.304115529Z" level=info msg="CreateContainer within sandbox \"87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.091 [INFO][4370] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.109 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0 csi-node-driver- calico-system 1da2d440-02fc-4a40-abf1-80ffcd9275c1 710 0 2025-12-16 12:26:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515-1-0-7-179ea8c226 csi-node-driver-k8fpw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali09d530ffc04 [] [] }} ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Namespace="calico-system" Pod="csi-node-driver-k8fpw" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.109 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Namespace="calico-system" Pod="csi-node-driver-k8fpw" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" Dec 16 12:26:25.306967 
containerd[1706]: 2025-12-16 12:26:25.133 [INFO][4397] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" HandleID="k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Workload="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.133 [INFO][4397] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" HandleID="k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Workload="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-7-179ea8c226", "pod":"csi-node-driver-k8fpw", "timestamp":"2025-12-16 12:26:25.13361818 +0000 UTC"}, Hostname:"ci-4515-1-0-7-179ea8c226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.133 [INFO][4397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.165 [INFO][4397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
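The Calico IPAM records around these entries show the node holding an affinity for the block 192.168.122.64/26 and claiming 192.168.122.66 and 192.168.122.67 from it. A quick containment check with the standard `ipaddress` module (nothing Calico-specific, just verifying the addresses fall inside the affine block):

```python
import ipaddress

# Affine block and claimed addresses as reported by ipam/ipam.go in the log.
block = ipaddress.ip_network("192.168.122.64/26")
claimed = [ipaddress.ip_address(a) for a in ("192.168.122.66", "192.168.122.67")]

assert all(ip in block for ip in claimed)
print(block.num_addresses)  # a /26 holds 64 addresses
```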
Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.165 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-7-179ea8c226' Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.241 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.247 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.260 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.265 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.268 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.268 [INFO][4397] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.270 [INFO][4397] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231 Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.276 [INFO][4397] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.282 [INFO][4397] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.67/26] block=192.168.122.64/26 handle="k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.282 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.67/26] handle="k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.282 [INFO][4397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:25.306967 containerd[1706]: 2025-12-16 12:26:25.283 [INFO][4397] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.67/26] IPv6=[] ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" HandleID="k8s-pod-network.5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Workload="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" Dec 16 12:26:25.307491 containerd[1706]: 2025-12-16 12:26:25.286 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Namespace="calico-system" Pod="csi-node-driver-k8fpw" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1da2d440-02fc-4a40-abf1-80ffcd9275c1", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"", Pod:"csi-node-driver-k8fpw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09d530ffc04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:25.307491 containerd[1706]: 2025-12-16 12:26:25.286 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.67/32] ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Namespace="calico-system" Pod="csi-node-driver-k8fpw" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" Dec 16 12:26:25.307491 containerd[1706]: 2025-12-16 12:26:25.286 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09d530ffc04 ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Namespace="calico-system" Pod="csi-node-driver-k8fpw" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" Dec 16 12:26:25.307491 containerd[1706]: 2025-12-16 12:26:25.288 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Namespace="calico-system" Pod="csi-node-driver-k8fpw" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" Dec 16 12:26:25.307491 
containerd[1706]: 2025-12-16 12:26:25.289 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Namespace="calico-system" Pod="csi-node-driver-k8fpw" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1da2d440-02fc-4a40-abf1-80ffcd9275c1", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231", Pod:"csi-node-driver-k8fpw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09d530ffc04", MAC:"06:df:07:af:2a:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:25.307491 containerd[1706]: 
2025-12-16 12:26:25.303 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" Namespace="calico-system" Pod="csi-node-driver-k8fpw" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-csi--node--driver--k8fpw-eth0" Dec 16 12:26:25.322883 containerd[1706]: time="2025-12-16T12:26:25.322668589Z" level=info msg="Container 9cffcff6991c51ef70698b7978bdecf3f977a6364ae10b3338f22227c36a1030: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:25.341337 containerd[1706]: time="2025-12-16T12:26:25.341256329Z" level=info msg="connecting to shim 5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231" address="unix:///run/containerd/s/501b3383182620102877fe68e9f08cf8ee6dae1a79b080a8f5336ae40a254de1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:25.343728 containerd[1706]: time="2025-12-16T12:26:25.343683817Z" level=info msg="CreateContainer within sandbox \"87bf46c4c4ccd3a0074d05f35352d2d3131453819aaf76003511bcdc85c9beb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9cffcff6991c51ef70698b7978bdecf3f977a6364ae10b3338f22227c36a1030\"" Dec 16 12:26:25.344953 containerd[1706]: time="2025-12-16T12:26:25.344924381Z" level=info msg="StartContainer for \"9cffcff6991c51ef70698b7978bdecf3f977a6364ae10b3338f22227c36a1030\"" Dec 16 12:26:25.345771 containerd[1706]: time="2025-12-16T12:26:25.345743223Z" level=info msg="connecting to shim 9cffcff6991c51ef70698b7978bdecf3f977a6364ae10b3338f22227c36a1030" address="unix:///run/containerd/s/c9e19d0b9b546d8a7b78fba9d00a975c1764481a271e790c4dbe473630657629" protocol=ttrpc version=3 Dec 16 12:26:25.382559 systemd[1]: Started cri-containerd-5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231.scope - libcontainer container 5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231. 
Dec 16 12:26:25.383610 systemd[1]: Started cri-containerd-9cffcff6991c51ef70698b7978bdecf3f977a6364ae10b3338f22227c36a1030.scope - libcontainer container 9cffcff6991c51ef70698b7978bdecf3f977a6364ae10b3338f22227c36a1030. Dec 16 12:26:25.393000 audit: BPF prog-id=185 op=LOAD Dec 16 12:26:25.394000 audit: BPF prog-id=186 op=LOAD Dec 16 12:26:25.394000 audit[4490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562356564633434326332313632643539633435303236633039323738 Dec 16 12:26:25.394000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:26:25.394000 audit[4490]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562356564633434326332313632643539633435303236633039323738 Dec 16 12:26:25.394000 audit: BPF prog-id=187 op=LOAD Dec 16 12:26:25.394000 audit[4490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:26:25.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562356564633434326332313632643539633435303236633039323738 Dec 16 12:26:25.394000 audit: BPF prog-id=188 op=LOAD Dec 16 12:26:25.394000 audit[4490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562356564633434326332313632643539633435303236633039323738 Dec 16 12:26:25.394000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:26:25.394000 audit[4490]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562356564633434326332313632643539633435303236633039323738 Dec 16 12:26:25.394000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:26:25.394000 audit[4490]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562356564633434326332313632643539633435303236633039323738 Dec 16 12:26:25.394000 audit: BPF prog-id=189 op=LOAD Dec 16 12:26:25.394000 audit[4490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4478 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562356564633434326332313632643539633435303236633039323738 Dec 16 12:26:25.396000 audit: BPF prog-id=190 op=LOAD Dec 16 12:26:25.397000 audit: BPF prog-id=191 op=LOAD Dec 16 12:26:25.397000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4427 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963666663666636393931633531656637303639386237393738626465 Dec 16 12:26:25.397000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:26:25.397000 audit[4488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4488 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963666663666636393931633531656637303639386237393738626465 Dec 16 12:26:25.397000 audit: BPF prog-id=192 op=LOAD Dec 16 12:26:25.397000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4427 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963666663666636393931633531656637303639386237393738626465 Dec 16 12:26:25.398000 audit: BPF prog-id=193 op=LOAD Dec 16 12:26:25.398000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4427 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963666663666636393931633531656637303639386237393738626465 Dec 16 12:26:25.398000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:26:25.398000 audit[4488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=4427 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963666663666636393931633531656637303639386237393738626465 Dec 16 12:26:25.398000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:26:25.398000 audit[4488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963666663666636393931633531656637303639386237393738626465 Dec 16 12:26:25.398000 audit: BPF prog-id=194 op=LOAD Dec 16 12:26:25.398000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4427 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963666663666636393931633531656637303639386237393738626465 Dec 16 12:26:25.422751 containerd[1706]: time="2025-12-16T12:26:25.422707551Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-k8fpw,Uid:1da2d440-02fc-4a40-abf1-80ffcd9275c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b5edc442c2162d59c45026c092781f6137b262c2faffc03d04fc7647de2e231\"" Dec 16 12:26:25.425464 containerd[1706]: time="2025-12-16T12:26:25.425420960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:26:25.432746 containerd[1706]: time="2025-12-16T12:26:25.432710104Z" level=info msg="StartContainer for \"9cffcff6991c51ef70698b7978bdecf3f977a6364ae10b3338f22227c36a1030\" returns successfully" Dec 16 12:26:25.785756 containerd[1706]: time="2025-12-16T12:26:25.785712721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:25.789728 containerd[1706]: time="2025-12-16T12:26:25.789646894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:26:25.789728 containerd[1706]: time="2025-12-16T12:26:25.789681894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:25.790035 kubelet[2901]: E1216 12:26:25.789922 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:26:25.790035 kubelet[2901]: E1216 12:26:25.790006 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:26:25.790427 kubelet[2901]: E1216 12:26:25.790099 2901 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:25.791276 containerd[1706]: time="2025-12-16T12:26:25.791035819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:26:26.139099 containerd[1706]: time="2025-12-16T12:26:26.138959380Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:26.140692 containerd[1706]: time="2025-12-16T12:26:26.140650825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:26:26.140763 containerd[1706]: time="2025-12-16T12:26:26.140700226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:26.140974 kubelet[2901]: E1216 12:26:26.140913 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:26:26.141098 kubelet[2901]: E1216 12:26:26.140978 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:26:26.141098 kubelet[2901]: E1216 12:26:26.141075 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:26.141167 kubelet[2901]: E1216 12:26:26.141133 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:26.197760 kubelet[2901]: E1216 12:26:26.197703 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:26.223155 kubelet[2901]: I1216 12:26:26.223007 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9ck8v" podStartSLOduration=39.222989731 podStartE2EDuration="39.222989731s" podCreationTimestamp="2025-12-16 12:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:26.222533369 +0000 UTC m=+46.255728522" watchObservedRunningTime="2025-12-16 12:26:26.222989731 +0000 UTC m=+46.256184924" Dec 16 12:26:26.233000 audit[4574]: NETFILTER_CFG table=filter:119 family=2 entries=19 op=nft_register_rule pid=4574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:26.233000 audit[4574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd3fa48f0 a2=0 a3=1 items=0 ppid=3030 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:26.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:26.238000 audit[4574]: NETFILTER_CFG table=nat:120 family=2 entries=33 op=nft_register_chain pid=4574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:26.238000 audit[4574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffd3fa48f0 a2=0 a3=1 items=0 ppid=3030 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:26.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:26.365515 systemd-networkd[1602]: cali92323502689: Gained IPv6LL Dec 16 12:26:27.059512 containerd[1706]: time="2025-12-16T12:26:27.059418147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f6dbc5-vnxvc,Uid:f8c50491-6041-443b-aedf-13a9fee1a718,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:27.134430 systemd-networkd[1602]: cali09d530ffc04: Gained IPv6LL Dec 16 12:26:27.173311 systemd-networkd[1602]: calib2854ab45d1: Link UP Dec 16 12:26:27.173504 systemd-networkd[1602]: calib2854ab45d1: Gained carrier Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.080 [INFO][4599] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.094 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0 calico-kube-controllers-758f6dbc5- calico-system f8c50491-6041-443b-aedf-13a9fee1a718 807 0 2025-12-16 12:26:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:758f6dbc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515-1-0-7-179ea8c226 calico-kube-controllers-758f6dbc5-vnxvc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib2854ab45d1 [] [] }} ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Namespace="calico-system" Pod="calico-kube-controllers-758f6dbc5-vnxvc" 
WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.094 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Namespace="calico-system" Pod="calico-kube-controllers-758f6dbc5-vnxvc" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.118 [INFO][4613] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" HandleID="k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.119 [INFO][4613] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" HandleID="k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-7-179ea8c226", "pod":"calico-kube-controllers-758f6dbc5-vnxvc", "timestamp":"2025-12-16 12:26:27.118661258 +0000 UTC"}, Hostname:"ci-4515-1-0-7-179ea8c226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.119 [INFO][4613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.119 [INFO][4613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.119 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-7-179ea8c226' Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.130 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.143 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.148 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.150 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.153 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.153 [INFO][4613] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.155 [INFO][4613] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.160 [INFO][4613] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" 
host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.166 [INFO][4613] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.68/26] block=192.168.122.64/26 handle="k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.166 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.68/26] handle="k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.166 [INFO][4613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:27.189301 containerd[1706]: 2025-12-16 12:26:27.166 [INFO][4613] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.68/26] IPv6=[] ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" HandleID="k8s-pod-network.b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" Dec 16 12:26:27.190050 containerd[1706]: 2025-12-16 12:26:27.168 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Namespace="calico-system" Pod="calico-kube-controllers-758f6dbc5-vnxvc" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0", GenerateName:"calico-kube-controllers-758f6dbc5-", Namespace:"calico-system", SelfLink:"", UID:"f8c50491-6041-443b-aedf-13a9fee1a718", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 3, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"758f6dbc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"", Pod:"calico-kube-controllers-758f6dbc5-vnxvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2854ab45d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:27.190050 containerd[1706]: 2025-12-16 12:26:27.169 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.68/32] ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Namespace="calico-system" Pod="calico-kube-controllers-758f6dbc5-vnxvc" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" Dec 16 12:26:27.190050 containerd[1706]: 2025-12-16 12:26:27.169 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2854ab45d1 ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Namespace="calico-system" Pod="calico-kube-controllers-758f6dbc5-vnxvc" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" Dec 16 12:26:27.190050 containerd[1706]: 2025-12-16 12:26:27.173 [INFO][4599] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Namespace="calico-system" Pod="calico-kube-controllers-758f6dbc5-vnxvc" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" Dec 16 12:26:27.190050 containerd[1706]: 2025-12-16 12:26:27.176 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Namespace="calico-system" Pod="calico-kube-controllers-758f6dbc5-vnxvc" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0", GenerateName:"calico-kube-controllers-758f6dbc5-", Namespace:"calico-system", SelfLink:"", UID:"f8c50491-6041-443b-aedf-13a9fee1a718", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"758f6dbc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb", Pod:"calico-kube-controllers-758f6dbc5-vnxvc", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2854ab45d1", MAC:"26:b3:ee:04:26:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:27.190050 containerd[1706]: 2025-12-16 12:26:27.186 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" Namespace="calico-system" Pod="calico-kube-controllers-758f6dbc5-vnxvc" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--kube--controllers--758f6dbc5--vnxvc-eth0" Dec 16 12:26:27.201913 kubelet[2901]: E1216 12:26:27.201851 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:27.234697 containerd[1706]: time="2025-12-16T12:26:27.234099710Z" level=info msg="connecting to shim b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb" 
address="unix:///run/containerd/s/3590c254411235a4b4fcb2bee8e19920bd91021e8f630baa6a2dc216793f4552" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:27.270725 systemd[1]: Started cri-containerd-b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb.scope - libcontainer container b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb. Dec 16 12:26:27.281000 audit: BPF prog-id=195 op=LOAD Dec 16 12:26:27.281000 audit: BPF prog-id=196 op=LOAD Dec 16 12:26:27.281000 audit[4648]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4637 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:27.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313762316566316262343030373834633464333962643030663861 Dec 16 12:26:27.281000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:26:27.281000 audit[4648]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4637 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:27.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313762316566316262343030373834633464333962643030663861 Dec 16 12:26:27.281000 audit: BPF prog-id=197 op=LOAD Dec 16 12:26:27.281000 audit[4648]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4637 pid=4648 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:27.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313762316566316262343030373834633464333962643030663861 Dec 16 12:26:27.282000 audit: BPF prog-id=198 op=LOAD Dec 16 12:26:27.282000 audit[4648]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4637 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:27.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313762316566316262343030373834633464333962643030663861 Dec 16 12:26:27.282000 audit: BPF prog-id=198 op=UNLOAD Dec 16 12:26:27.282000 audit[4648]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4637 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:27.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313762316566316262343030373834633464333962643030663861 Dec 16 12:26:27.282000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:26:27.282000 audit[4648]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 
ppid=4637 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:27.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313762316566316262343030373834633464333962643030663861 Dec 16 12:26:27.282000 audit: BPF prog-id=199 op=LOAD Dec 16 12:26:27.282000 audit[4648]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4637 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:27.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313762316566316262343030373834633464333962643030663861 Dec 16 12:26:27.303669 containerd[1706]: time="2025-12-16T12:26:27.303595894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f6dbc5-vnxvc,Uid:f8c50491-6041-443b-aedf-13a9fee1a718,Namespace:calico-system,Attempt:0,} returns sandbox id \"b717b1ef1bb400784c4d39bd00f8a8db9cfd7cdcf4e8e3eda3a947da70bf47cb\"" Dec 16 12:26:27.305261 containerd[1706]: time="2025-12-16T12:26:27.305237539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:26:27.639939 containerd[1706]: time="2025-12-16T12:26:27.639873378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:27.644778 containerd[1706]: time="2025-12-16T12:26:27.644686513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:26:27.644913 containerd[1706]: time="2025-12-16T12:26:27.644777433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:27.645069 kubelet[2901]: E1216 12:26:27.645027 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:26:27.645117 kubelet[2901]: E1216 12:26:27.645073 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:26:27.645164 kubelet[2901]: E1216 12:26:27.645139 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-758f6dbc5-vnxvc_calico-system(f8c50491-6041-443b-aedf-13a9fee1a718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:27.645210 kubelet[2901]: E1216 12:26:27.645178 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:26:28.060127 containerd[1706]: time="2025-12-16T12:26:28.060061132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dbdbb8c6-9lrp5,Uid:06792be6-fad3-4b79-a250-73afb10c06a6,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:28.062313 containerd[1706]: time="2025-12-16T12:26:28.062262859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h72ht,Uid:5b2b3263-cc70-4a4f-a835-4543e7a31ab8,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:28.177506 systemd-networkd[1602]: calif0473e8db08: Link UP Dec 16 12:26:28.177643 systemd-networkd[1602]: calif0473e8db08: Gained carrier Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.089 [INFO][4697] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.109 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0 calico-apiserver-65dbdbb8c6- calico-apiserver 06792be6-fad3-4b79-a250-73afb10c06a6 804 0 2025-12-16 12:25:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65dbdbb8c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-7-179ea8c226 calico-apiserver-65dbdbb8c6-9lrp5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0473e8db08 [] [] }} ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Namespace="calico-apiserver" 
Pod="calico-apiserver-65dbdbb8c6-9lrp5" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.109 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-9lrp5" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.133 [INFO][4725] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" HandleID="k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.133 [INFO][4725] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" HandleID="k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000585920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-7-179ea8c226", "pod":"calico-apiserver-65dbdbb8c6-9lrp5", "timestamp":"2025-12-16 12:26:28.13381377 +0000 UTC"}, Hostname:"ci-4515-1-0-7-179ea8c226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.134 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.134 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.134 [INFO][4725] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-7-179ea8c226' Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.146 [INFO][4725] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.151 [INFO][4725] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.155 [INFO][4725] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.157 [INFO][4725] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.159 [INFO][4725] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.160 [INFO][4725] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.161 [INFO][4725] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.166 [INFO][4725] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" 
host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.172 [INFO][4725] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.69/26] block=192.168.122.64/26 handle="k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.172 [INFO][4725] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.69/26] handle="k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.172 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:28.191530 containerd[1706]: 2025-12-16 12:26:28.172 [INFO][4725] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.69/26] IPv6=[] ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" HandleID="k8s-pod-network.6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" Dec 16 12:26:28.192229 containerd[1706]: 2025-12-16 12:26:28.174 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-9lrp5" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0", GenerateName:"calico-apiserver-65dbdbb8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"06792be6-fad3-4b79-a250-73afb10c06a6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 56, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dbdbb8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"", Pod:"calico-apiserver-65dbdbb8c6-9lrp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0473e8db08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:28.192229 containerd[1706]: 2025-12-16 12:26:28.174 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.69/32] ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-9lrp5" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" Dec 16 12:26:28.192229 containerd[1706]: 2025-12-16 12:26:28.174 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0473e8db08 ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-9lrp5" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" Dec 16 12:26:28.192229 containerd[1706]: 2025-12-16 12:26:28.175 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-9lrp5" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" Dec 16 12:26:28.192229 containerd[1706]: 2025-12-16 12:26:28.176 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-9lrp5" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0", GenerateName:"calico-apiserver-65dbdbb8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"06792be6-fad3-4b79-a250-73afb10c06a6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dbdbb8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a", Pod:"calico-apiserver-65dbdbb8c6-9lrp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0473e8db08", MAC:"8e:05:63:93:77:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:28.192229 containerd[1706]: 2025-12-16 12:26:28.190 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-9lrp5" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--9lrp5-eth0" Dec 16 12:26:28.204686 kubelet[2901]: E1216 12:26:28.204611 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:26:28.228777 containerd[1706]: time="2025-12-16T12:26:28.228684475Z" level=info msg="connecting to shim 6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a" address="unix:///run/containerd/s/0182e30165d317fbfcb1625baccd786955e13888df60dd9a1b04f9365d4d5dda" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:28.250537 systemd[1]: Started cri-containerd-6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a.scope - libcontainer container 6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a. 
Dec 16 12:26:28.267000 audit: BPF prog-id=200 op=LOAD Dec 16 12:26:28.268000 audit: BPF prog-id=201 op=LOAD Dec 16 12:26:28.268000 audit[4770]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4757 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661326331313332633765396331666435393837393233623735323734 Dec 16 12:26:28.269000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:26:28.269000 audit[4770]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661326331313332633765396331666435393837393233623735323734 Dec 16 12:26:28.270000 audit: BPF prog-id=202 op=LOAD Dec 16 12:26:28.270000 audit[4770]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4757 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.270000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661326331313332633765396331666435393837393233623735323734 Dec 16 12:26:28.270000 audit: BPF prog-id=203 op=LOAD Dec 16 12:26:28.270000 audit[4770]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4757 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661326331313332633765396331666435393837393233623735323734 Dec 16 12:26:28.270000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:26:28.270000 audit[4770]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661326331313332633765396331666435393837393233623735323734 Dec 16 12:26:28.270000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:26:28.270000 audit[4770]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:26:28.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661326331313332633765396331666435393837393233623735323734 Dec 16 12:26:28.270000 audit: BPF prog-id=204 op=LOAD Dec 16 12:26:28.270000 audit[4770]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4757 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661326331313332633765396331666435393837393233623735323734 Dec 16 12:26:28.285239 systemd-networkd[1602]: cali8398fce498d: Link UP Dec 16 12:26:28.285630 systemd-networkd[1602]: cali8398fce498d: Gained carrier Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.098 [INFO][4708] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.113 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0 goldmane-7c778bb748- calico-system 5b2b3263-cc70-4a4f-a835-4543e7a31ab8 809 0 2025-12-16 12:26:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515-1-0-7-179ea8c226 goldmane-7c778bb748-h72ht eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8398fce498d [] 
[] }} ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Namespace="calico-system" Pod="goldmane-7c778bb748-h72ht" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.113 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Namespace="calico-system" Pod="goldmane-7c778bb748-h72ht" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.143 [INFO][4731] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" HandleID="k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Workload="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.143 [INFO][4731] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" HandleID="k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Workload="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-7-179ea8c226", "pod":"goldmane-7c778bb748-h72ht", "timestamp":"2025-12-16 12:26:28.14319304 +0000 UTC"}, Hostname:"ci-4515-1-0-7-179ea8c226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.143 [INFO][4731] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.172 [INFO][4731] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.172 [INFO][4731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-7-179ea8c226' Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.246 [INFO][4731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.252 [INFO][4731] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.258 [INFO][4731] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.262 [INFO][4731] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.265 [INFO][4731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.265 [INFO][4731] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.268 [INFO][4731] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51 Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.272 [INFO][4731] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" 
host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.279 [INFO][4731] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.70/26] block=192.168.122.64/26 handle="k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.279 [INFO][4731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.70/26] handle="k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.279 [INFO][4731] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:28.307904 containerd[1706]: 2025-12-16 12:26:28.279 [INFO][4731] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.70/26] IPv6=[] ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" HandleID="k8s-pod-network.6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Workload="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" Dec 16 12:26:28.308581 containerd[1706]: 2025-12-16 12:26:28.281 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Namespace="calico-system" Pod="goldmane-7c778bb748-h72ht" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"5b2b3263-cc70-4a4f-a835-4543e7a31ab8", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"", Pod:"goldmane-7c778bb748-h72ht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8398fce498d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:28.308581 containerd[1706]: 2025-12-16 12:26:28.282 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.70/32] ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Namespace="calico-system" Pod="goldmane-7c778bb748-h72ht" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" Dec 16 12:26:28.308581 containerd[1706]: 2025-12-16 12:26:28.282 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8398fce498d ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Namespace="calico-system" Pod="goldmane-7c778bb748-h72ht" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" Dec 16 12:26:28.308581 containerd[1706]: 2025-12-16 12:26:28.285 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Namespace="calico-system" Pod="goldmane-7c778bb748-h72ht" 
WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" Dec 16 12:26:28.308581 containerd[1706]: 2025-12-16 12:26:28.286 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Namespace="calico-system" Pod="goldmane-7c778bb748-h72ht" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"5b2b3263-cc70-4a4f-a835-4543e7a31ab8", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51", Pod:"goldmane-7c778bb748-h72ht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8398fce498d", MAC:"a2:8b:be:c5:d5:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 
12:26:28.308581 containerd[1706]: 2025-12-16 12:26:28.300 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" Namespace="calico-system" Pod="goldmane-7c778bb748-h72ht" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-goldmane--7c778bb748--h72ht-eth0" Dec 16 12:26:28.318676 containerd[1706]: time="2025-12-16T12:26:28.317502842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dbdbb8c6-9lrp5,Uid:06792be6-fad3-4b79-a250-73afb10c06a6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6a2c1132c7e9c1fd5987923b752744d6bc4e7279c0d474701e01f518187c210a\"" Dec 16 12:26:28.319943 containerd[1706]: time="2025-12-16T12:26:28.319910649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:26:28.346535 containerd[1706]: time="2025-12-16T12:26:28.346489455Z" level=info msg="connecting to shim 6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51" address="unix:///run/containerd/s/b6af7515f4b336be3cadbad0ca2a6fda6ee4456f786b63eed28bcaf417b27a76" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:28.376507 systemd[1]: Started cri-containerd-6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51.scope - libcontainer container 6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51. 
Dec 16 12:26:28.396000 audit: BPF prog-id=205 op=LOAD Dec 16 12:26:28.396000 audit: BPF prog-id=206 op=LOAD Dec 16 12:26:28.396000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4810 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343363363432646339363364366466383536613866633934363430 Dec 16 12:26:28.396000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:26:28.396000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4810 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343363363432646339363364366466383536613866633934363430 Dec 16 12:26:28.397000 audit: BPF prog-id=207 op=LOAD Dec 16 12:26:28.397000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4810 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.397000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343363363432646339363364366466383536613866633934363430 Dec 16 12:26:28.397000 audit: BPF prog-id=208 op=LOAD Dec 16 12:26:28.397000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4810 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343363363432646339363364366466383536613866633934363430 Dec 16 12:26:28.397000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:26:28.397000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4810 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343363363432646339363364366466383536613866633934363430 Dec 16 12:26:28.397000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:26:28.397000 audit[4823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4810 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:26:28.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343363363432646339363364366466383536613866633934363430 Dec 16 12:26:28.397000 audit: BPF prog-id=209 op=LOAD Dec 16 12:26:28.397000 audit[4823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4810 pid=4823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632343363363432646339363364366466383536613866633934363430 Dec 16 12:26:28.420652 containerd[1706]: time="2025-12-16T12:26:28.420591174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h72ht,Uid:5b2b3263-cc70-4a4f-a835-4543e7a31ab8,Namespace:calico-system,Attempt:0,} returns sandbox id \"6243c642dc963d6df856a8fc94640b22c89e7d3b3fd643d4d9bf011123aeca51\"" Dec 16 12:26:28.605451 systemd-networkd[1602]: calib2854ab45d1: Gained IPv6LL Dec 16 12:26:28.652329 containerd[1706]: time="2025-12-16T12:26:28.652266001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:28.653728 containerd[1706]: time="2025-12-16T12:26:28.653691085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:26:28.653836 containerd[1706]: time="2025-12-16T12:26:28.653723205Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:28.654007 kubelet[2901]: E1216 12:26:28.653975 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:28.654081 kubelet[2901]: E1216 12:26:28.654016 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:28.654306 kubelet[2901]: E1216 12:26:28.654215 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-9lrp5_calico-apiserver(06792be6-fad3-4b79-a250-73afb10c06a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:28.654306 kubelet[2901]: E1216 12:26:28.654257 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:26:28.654697 containerd[1706]: time="2025-12-16T12:26:28.654592008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:26:28.974536 containerd[1706]: 
time="2025-12-16T12:26:28.974344319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:28.977989 containerd[1706]: time="2025-12-16T12:26:28.977931370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:26:28.978134 containerd[1706]: time="2025-12-16T12:26:28.977991611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:28.978265 kubelet[2901]: E1216 12:26:28.978193 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:26:28.978265 kubelet[2901]: E1216 12:26:28.978245 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:26:28.978370 kubelet[2901]: E1216 12:26:28.978338 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h72ht_calico-system(5b2b3263-cc70-4a4f-a835-4543e7a31ab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:28.978405 kubelet[2901]: E1216 12:26:28.978369 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:26:29.058964 containerd[1706]: time="2025-12-16T12:26:29.058860311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dxpqs,Uid:d35c392d-b26a-4874-a153-e50017e8ee4f,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:29.061121 containerd[1706]: time="2025-12-16T12:26:29.060944118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dbdbb8c6-62gh6,Uid:ea12a60c-4683-4d5e-8e8f-9b466a85a781,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:29.193776 systemd-networkd[1602]: calicddce270300: Link UP Dec 16 12:26:29.194558 systemd-networkd[1602]: calicddce270300: Gained carrier Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.089 [INFO][4873] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.110 [INFO][4873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0 coredns-66bc5c9577- kube-system d35c392d-b26a-4874-a153-e50017e8ee4f 805 0 2025-12-16 12:25:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-7-179ea8c226 coredns-66bc5c9577-dxpqs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicddce270300 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Namespace="kube-system" 
Pod="coredns-66bc5c9577-dxpqs" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.111 [INFO][4873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Namespace="kube-system" Pod="coredns-66bc5c9577-dxpqs" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.143 [INFO][4902] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" HandleID="k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Workload="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.143 [INFO][4902] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" HandleID="k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Workload="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137480), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-7-179ea8c226", "pod":"coredns-66bc5c9577-dxpqs", "timestamp":"2025-12-16 12:26:29.143537064 +0000 UTC"}, Hostname:"ci-4515-1-0-7-179ea8c226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.143 [INFO][4902] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.143 [INFO][4902] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.143 [INFO][4902] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-7-179ea8c226' Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.154 [INFO][4902] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.160 [INFO][4902] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.166 [INFO][4902] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.170 [INFO][4902] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.173 [INFO][4902] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.173 [INFO][4902] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.174 [INFO][4902] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765 Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.181 [INFO][4902] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 handle="k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.189 [INFO][4902] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.71/26] block=192.168.122.64/26 handle="k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.189 [INFO][4902] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.71/26] handle="k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.189 [INFO][4902] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:29.209345 containerd[1706]: 2025-12-16 12:26:29.189 [INFO][4902] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.71/26] IPv6=[] ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" HandleID="k8s-pod-network.01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Workload="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" Dec 16 12:26:29.210220 containerd[1706]: 2025-12-16 12:26:29.190 [INFO][4873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Namespace="kube-system" Pod="coredns-66bc5c9577-dxpqs" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d35c392d-b26a-4874-a153-e50017e8ee4f", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"", Pod:"coredns-66bc5c9577-dxpqs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicddce270300", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:29.210220 containerd[1706]: 2025-12-16 12:26:29.191 [INFO][4873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.71/32] ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Namespace="kube-system" Pod="coredns-66bc5c9577-dxpqs" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" Dec 16 12:26:29.210220 containerd[1706]: 2025-12-16 12:26:29.191 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicddce270300 
ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Namespace="kube-system" Pod="coredns-66bc5c9577-dxpqs" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" Dec 16 12:26:29.210220 containerd[1706]: 2025-12-16 12:26:29.194 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Namespace="kube-system" Pod="coredns-66bc5c9577-dxpqs" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" Dec 16 12:26:29.210220 containerd[1706]: 2025-12-16 12:26:29.194 [INFO][4873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" Namespace="kube-system" Pod="coredns-66bc5c9577-dxpqs" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d35c392d-b26a-4874-a153-e50017e8ee4f", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", 
ContainerID:"01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765", Pod:"coredns-66bc5c9577-dxpqs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicddce270300", MAC:"72:db:81:77:7b:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:29.210458 kubelet[2901]: E1216 12:26:29.209877 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:26:29.210670 containerd[1706]: 2025-12-16 12:26:29.207 [INFO][4873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" 
Namespace="kube-system" Pod="coredns-66bc5c9577-dxpqs" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-coredns--66bc5c9577--dxpqs-eth0" Dec 16 12:26:29.213912 kubelet[2901]: E1216 12:26:29.213868 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:26:29.214050 kubelet[2901]: E1216 12:26:29.213961 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:26:29.243000 audit[4928]: NETFILTER_CFG table=filter:121 family=2 entries=16 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:29.243000 audit[4928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe3379e20 a2=0 a3=1 items=0 ppid=3030 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.243000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:29.249000 audit[4928]: NETFILTER_CFG table=nat:122 family=2 entries=18 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:29.249000 audit[4928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5004 a0=3 a1=ffffe3379e20 a2=0 a3=1 items=0 ppid=3030 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.249000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:29.251861 containerd[1706]: time="2025-12-16T12:26:29.251814053Z" level=info msg="connecting to shim 01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765" address="unix:///run/containerd/s/73843a8e9b436217d122bb09e0e7796760e759f795f61dac28ccb5c3b41ad0da" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:29.280538 systemd[1]: Started cri-containerd-01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765.scope - libcontainer container 01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765. 
Dec 16 12:26:29.295000 audit: BPF prog-id=210 op=LOAD Dec 16 12:26:29.296547 systemd-networkd[1602]: calia13eaf728ee: Link UP Dec 16 12:26:29.296000 audit: BPF prog-id=211 op=LOAD Dec 16 12:26:29.297101 systemd-networkd[1602]: calia13eaf728ee: Gained carrier Dec 16 12:26:29.296000 audit[4952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4939 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031646336396433346531653466323739646635393066623539323539 Dec 16 12:26:29.297000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:26:29.297000 audit[4952]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031646336396433346531653466323739646635393066623539323539 Dec 16 12:26:29.297000 audit: BPF prog-id=212 op=LOAD Dec 16 12:26:29.297000 audit[4952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4939 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.297000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031646336396433346531653466323739646635393066623539323539 Dec 16 12:26:29.297000 audit: BPF prog-id=213 op=LOAD Dec 16 12:26:29.297000 audit[4952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4939 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031646336396433346531653466323739646635393066623539323539 Dec 16 12:26:29.298000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:26:29.298000 audit[4952]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031646336396433346531653466323739646635393066623539323539 Dec 16 12:26:29.298000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:26:29.298000 audit[4952]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:26:29.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031646336396433346531653466323739646635393066623539323539 Dec 16 12:26:29.298000 audit: BPF prog-id=214 op=LOAD Dec 16 12:26:29.298000 audit[4952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4939 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031646336396433346531653466323739646635393066623539323539 Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.101 [INFO][4885] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.122 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0 calico-apiserver-65dbdbb8c6- calico-apiserver ea12a60c-4683-4d5e-8e8f-9b466a85a781 808 0 2025-12-16 12:25:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65dbdbb8c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-7-179ea8c226 calico-apiserver-65dbdbb8c6-62gh6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia13eaf728ee [] [] }} 
ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-62gh6" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.122 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-62gh6" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.150 [INFO][4908] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" HandleID="k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.150 [INFO][4908] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" HandleID="k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-7-179ea8c226", "pod":"calico-apiserver-65dbdbb8c6-62gh6", "timestamp":"2025-12-16 12:26:29.150517287 +0000 UTC"}, Hostname:"ci-4515-1-0-7-179ea8c226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.151 [INFO][4908] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.189 [INFO][4908] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.189 [INFO][4908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-7-179ea8c226' Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.257 [INFO][4908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.265 [INFO][4908] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.271 [INFO][4908] ipam/ipam.go 511: Trying affinity for 192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.273 [INFO][4908] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.277 [INFO][4908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.64/26 host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.277 [INFO][4908] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.64/26 handle="k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.279 [INFO][4908] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.284 [INFO][4908] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.64/26 
handle="k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.292 [INFO][4908] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.72/26] block=192.168.122.64/26 handle="k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.292 [INFO][4908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.72/26] handle="k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" host="ci-4515-1-0-7-179ea8c226" Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.292 [INFO][4908] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:29.310610 containerd[1706]: 2025-12-16 12:26:29.292 [INFO][4908] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.72/26] IPv6=[] ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" HandleID="k8s-pod-network.c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Workload="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" Dec 16 12:26:29.311501 containerd[1706]: 2025-12-16 12:26:29.294 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-62gh6" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0", GenerateName:"calico-apiserver-65dbdbb8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea12a60c-4683-4d5e-8e8f-9b466a85a781", ResourceVersion:"808", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dbdbb8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"", Pod:"calico-apiserver-65dbdbb8c6-62gh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia13eaf728ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:29.311501 containerd[1706]: 2025-12-16 12:26:29.294 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.72/32] ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-62gh6" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" Dec 16 12:26:29.311501 containerd[1706]: 2025-12-16 12:26:29.295 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia13eaf728ee ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-62gh6" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" Dec 16 12:26:29.311501 containerd[1706]: 2025-12-16 12:26:29.297 
[INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-62gh6" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" Dec 16 12:26:29.311501 containerd[1706]: 2025-12-16 12:26:29.298 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-62gh6" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0", GenerateName:"calico-apiserver-65dbdbb8c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea12a60c-4683-4d5e-8e8f-9b466a85a781", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dbdbb8c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-7-179ea8c226", ContainerID:"c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d", Pod:"calico-apiserver-65dbdbb8c6-62gh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.122.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia13eaf728ee", MAC:"42:3c:de:cb:56:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:29.311501 containerd[1706]: 2025-12-16 12:26:29.308 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" Namespace="calico-apiserver" Pod="calico-apiserver-65dbdbb8c6-62gh6" WorkloadEndpoint="ci--4515--1--0--7--179ea8c226-k8s-calico--apiserver--65dbdbb8c6--62gh6-eth0" Dec 16 12:26:29.339521 containerd[1706]: time="2025-12-16T12:26:29.339460936Z" level=info msg="connecting to shim c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d" address="unix:///run/containerd/s/548ebd210997b4aacaf0dbbd37fe0691c34b1afafa8ced9fab5b3f86842a9a02" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:29.339920 containerd[1706]: time="2025-12-16T12:26:29.339873057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dxpqs,Uid:d35c392d-b26a-4874-a153-e50017e8ee4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765\"" Dec 16 12:26:29.345825 containerd[1706]: time="2025-12-16T12:26:29.345781476Z" level=info msg="CreateContainer within sandbox \"01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:26:29.359161 containerd[1706]: time="2025-12-16T12:26:29.359105079Z" level=info msg="Container 601d811c18177174b881b6bf5a278ae5a67ad465860b4f679bb0de363bd43a54: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:29.365663 containerd[1706]: time="2025-12-16T12:26:29.365624220Z" level=info msg="CreateContainer within sandbox 
\"01dc69d34e1e4f279df590fb59259dacc2c90cf9e71c89b03bf043ed690a7765\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"601d811c18177174b881b6bf5a278ae5a67ad465860b4f679bb0de363bd43a54\"" Dec 16 12:26:29.366119 containerd[1706]: time="2025-12-16T12:26:29.366091901Z" level=info msg="StartContainer for \"601d811c18177174b881b6bf5a278ae5a67ad465860b4f679bb0de363bd43a54\"" Dec 16 12:26:29.366951 containerd[1706]: time="2025-12-16T12:26:29.366925344Z" level=info msg="connecting to shim 601d811c18177174b881b6bf5a278ae5a67ad465860b4f679bb0de363bd43a54" address="unix:///run/containerd/s/73843a8e9b436217d122bb09e0e7796760e759f795f61dac28ccb5c3b41ad0da" protocol=ttrpc version=3 Dec 16 12:26:29.367601 systemd[1]: Started cri-containerd-c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d.scope - libcontainer container c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d. Dec 16 12:26:29.380000 audit: BPF prog-id=215 op=LOAD Dec 16 12:26:29.381000 audit: BPF prog-id=216 op=LOAD Dec 16 12:26:29.381000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4991 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306138626639303238383437323032376566383666643135393465 Dec 16 12:26:29.381000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:26:29.381000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4991 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306138626639303238383437323032376566383666643135393465 Dec 16 12:26:29.381000 audit: BPF prog-id=217 op=LOAD Dec 16 12:26:29.381000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4991 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306138626639303238383437323032376566383666643135393465 Dec 16 12:26:29.381000 audit: BPF prog-id=218 op=LOAD Dec 16 12:26:29.381000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4991 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306138626639303238383437323032376566383666643135393465 Dec 16 12:26:29.381000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:26:29.381000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4991 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306138626639303238383437323032376566383666643135393465 Dec 16 12:26:29.381000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:26:29.381000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4991 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306138626639303238383437323032376566383666643135393465 Dec 16 12:26:29.381000 audit: BPF prog-id=219 op=LOAD Dec 16 12:26:29.381000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4991 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306138626639303238383437323032376566383666643135393465 Dec 16 12:26:29.389608 systemd[1]: Started cri-containerd-601d811c18177174b881b6bf5a278ae5a67ad465860b4f679bb0de363bd43a54.scope - libcontainer container 601d811c18177174b881b6bf5a278ae5a67ad465860b4f679bb0de363bd43a54. 
Dec 16 12:26:29.405000 audit: BPF prog-id=220 op=LOAD Dec 16 12:26:29.405000 audit: BPF prog-id=221 op=LOAD Dec 16 12:26:29.405000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4939 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630316438313163313831373731373462383831623662663561323738 Dec 16 12:26:29.405000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:26:29.405000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630316438313163313831373731373462383831623662663561323738 Dec 16 12:26:29.405000 audit: BPF prog-id=222 op=LOAD Dec 16 12:26:29.405000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4939 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.405000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630316438313163313831373731373462383831623662663561323738 Dec 16 12:26:29.405000 audit: BPF prog-id=223 op=LOAD Dec 16 12:26:29.405000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4939 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630316438313163313831373731373462383831623662663561323738 Dec 16 12:26:29.405000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:26:29.405000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630316438313163313831373731373462383831623662663561323738 Dec 16 12:26:29.405000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:26:29.405000 audit[5014]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4939 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:26:29.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630316438313163313831373731373462383831623662663561323738 Dec 16 12:26:29.406000 audit: BPF prog-id=224 op=LOAD Dec 16 12:26:29.406000 audit[5014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4939 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630316438313163313831373731373462383831623662663561323738 Dec 16 12:26:29.413968 containerd[1706]: time="2025-12-16T12:26:29.413917096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dbdbb8c6-62gh6,Uid:ea12a60c-4683-4d5e-8e8f-9b466a85a781,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c80a8bf90288472027ef86fd1594eec6c144334919d8c06bfab724772195eb5d\"" Dec 16 12:26:29.419501 containerd[1706]: time="2025-12-16T12:26:29.419448873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:26:29.431097 containerd[1706]: time="2025-12-16T12:26:29.431061831Z" level=info msg="StartContainer for \"601d811c18177174b881b6bf5a278ae5a67ad465860b4f679bb0de363bd43a54\" returns successfully" Dec 16 12:26:29.754010 containerd[1706]: time="2025-12-16T12:26:29.753939752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:29.755204 containerd[1706]: time="2025-12-16T12:26:29.755152156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:26:29.755264 containerd[1706]: time="2025-12-16T12:26:29.755170396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:29.755499 kubelet[2901]: E1216 12:26:29.755443 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:29.755499 kubelet[2901]: E1216 12:26:29.755491 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:29.755607 kubelet[2901]: E1216 12:26:29.755563 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-62gh6_calico-apiserver(ea12a60c-4683-4d5e-8e8f-9b466a85a781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:29.755635 kubelet[2901]: E1216 12:26:29.755598 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:26:30.077505 systemd-networkd[1602]: cali8398fce498d: Gained IPv6LL Dec 16 12:26:30.141520 systemd-networkd[1602]: calif0473e8db08: Gained IPv6LL Dec 16 12:26:30.217917 kubelet[2901]: E1216 12:26:30.217869 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:26:30.225527 kubelet[2901]: E1216 12:26:30.225468 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:26:30.225808 kubelet[2901]: E1216 12:26:30.225642 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 
16 12:26:30.261000 audit[5090]: NETFILTER_CFG table=filter:123 family=2 entries=16 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:30.262522 kernel: kauditd_printk_skb: 200 callbacks suppressed Dec 16 12:26:30.262600 kernel: audit: type=1325 audit(1765887990.261:667): table=filter:123 family=2 entries=16 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:30.261000 audit[5090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdf034c60 a2=0 a3=1 items=0 ppid=3030 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:30.267202 kubelet[2901]: I1216 12:26:30.267125 2901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dxpqs" podStartSLOduration=43.267105966 podStartE2EDuration="43.267105966s" podCreationTimestamp="2025-12-16 12:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:30.266105282 +0000 UTC m=+50.299300435" watchObservedRunningTime="2025-12-16 12:26:30.267105966 +0000 UTC m=+50.300301119" Dec 16 12:26:30.268660 kernel: audit: type=1300 audit(1765887990.261:667): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdf034c60 a2=0 a3=1 items=0 ppid=3030 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:30.268821 kernel: audit: type=1327 audit(1765887990.261:667): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:30.261000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:30.276000 audit[5090]: NETFILTER_CFG table=nat:124 family=2 entries=18 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:30.276000 audit[5090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5004 a0=3 a1=ffffdf034c60 a2=0 a3=1 items=0 ppid=3030 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:30.283332 kernel: audit: type=1325 audit(1765887990.276:668): table=nat:124 family=2 entries=18 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:30.283482 kernel: audit: type=1300 audit(1765887990.276:668): arch=c00000b7 syscall=211 success=yes exit=5004 a0=3 a1=ffffdf034c60 a2=0 a3=1 items=0 ppid=3030 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:30.283518 kernel: audit: type=1327 audit(1765887990.276:668): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:30.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:30.525558 systemd-networkd[1602]: calicddce270300: Gained IPv6LL Dec 16 12:26:30.589453 systemd-networkd[1602]: calia13eaf728ee: Gained IPv6LL Dec 16 12:26:31.190047 kubelet[2901]: I1216 12:26:31.189756 2901 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:26:31.227558 kubelet[2901]: E1216 12:26:31.227485 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:26:31.294000 audit[5117]: NETFILTER_CFG table=filter:125 family=2 entries=15 op=nft_register_rule pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:31.294000 audit[5117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe840f050 a2=0 a3=1 items=0 ppid=3030 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.301357 kernel: audit: type=1325 audit(1765887991.294:669): table=filter:125 family=2 entries=15 op=nft_register_rule pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:31.301423 kernel: audit: type=1300 audit(1765887991.294:669): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe840f050 a2=0 a3=1 items=0 ppid=3030 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.301454 kernel: audit: type=1327 audit(1765887991.294:669): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:31.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:31.308000 audit[5117]: NETFILTER_CFG table=nat:126 family=2 entries=61 op=nft_register_chain pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:31.308000 
audit[5117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22668 a0=3 a1=ffffe840f050 a2=0 a3=1 items=0 ppid=3030 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:31.311315 kernel: audit: type=1325 audit(1765887991.308:670): table=nat:126 family=2 entries=61 op=nft_register_chain pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:31.632000 audit: BPF prog-id=225 op=LOAD Dec 16 12:26:31.632000 audit[5136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe19162d8 a2=98 a3=ffffe19162c8 items=0 ppid=5119 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.632000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:31.633000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:26:31.633000 audit[5136]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe19162a8 a3=0 items=0 ppid=5119 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.633000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:31.633000 audit: BPF prog-id=226 op=LOAD Dec 16 12:26:31.633000 audit[5136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe1916188 a2=74 a3=95 items=0 ppid=5119 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.633000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:31.633000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:26:31.633000 audit[5136]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5119 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.633000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:31.633000 audit: BPF prog-id=227 op=LOAD Dec 16 12:26:31.633000 audit[5136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe19161b8 a2=40 a3=ffffe19161e8 items=0 ppid=5119 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.633000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:31.633000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:26:31.633000 audit[5136]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe19161e8 items=0 ppid=5119 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.633000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:31.635000 audit: BPF prog-id=228 op=LOAD Dec 16 12:26:31.635000 audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5d23a08 a2=98 a3=fffff5d239f8 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.635000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.635000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:26:31.635000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff5d239d8 a3=0 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.635000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.635000 audit: BPF prog-id=229 op=LOAD Dec 16 12:26:31.635000 audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff5d23698 a2=74 a3=95 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.635000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.635000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:26:31.635000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.635000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.635000 audit: BPF prog-id=230 op=LOAD Dec 16 12:26:31.635000 audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff5d236f8 a2=94 a3=2 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.635000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.635000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:26:31.635000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.635000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.737000 
audit: BPF prog-id=231 op=LOAD Dec 16 12:26:31.737000 audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff5d236b8 a2=40 a3=fffff5d236e8 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.737000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:26:31.737000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff5d236e8 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.747000 audit: BPF prog-id=232 op=LOAD Dec 16 12:26:31.747000 audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff5d236c8 a2=94 a3=4 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.747000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:26:31.747000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.747000 audit: BPF prog-id=233 op=LOAD Dec 16 12:26:31.747000 
audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff5d23508 a2=94 a3=5 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.747000 audit: BPF prog-id=233 op=UNLOAD Dec 16 12:26:31.747000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.747000 audit: BPF prog-id=234 op=LOAD Dec 16 12:26:31.747000 audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff5d23738 a2=94 a3=6 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.748000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:26:31.748000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.748000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.748000 audit: BPF prog-id=235 op=LOAD Dec 16 12:26:31.748000 audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 
a1=fffff5d22f08 a2=94 a3=83 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.748000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.748000 audit: BPF prog-id=236 op=LOAD Dec 16 12:26:31.748000 audit[5137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff5d22cc8 a2=94 a3=2 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.748000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.748000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:26:31.748000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.748000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.749000 audit: BPF prog-id=235 op=UNLOAD Dec 16 12:26:31.749000 audit[5137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=28bca620 a3=28bbdb00 items=0 ppid=5119 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.749000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:31.758000 audit: BPF prog-id=237 op=LOAD Dec 16 12:26:31.758000 audit[5140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff579a288 a2=98 a3=fffff579a278 items=0 ppid=5119 pid=5140 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.758000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:31.758000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:26:31.758000 audit[5140]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff579a258 a3=0 items=0 ppid=5119 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.758000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:31.758000 audit: BPF prog-id=238 op=LOAD Dec 16 12:26:31.758000 audit[5140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff579a138 a2=74 a3=95 items=0 ppid=5119 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.758000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:31.758000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:26:31.758000 audit[5140]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5119 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.758000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:31.759000 audit: BPF prog-id=239 op=LOAD Dec 16 12:26:31.759000 audit[5140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff579a168 a2=40 a3=fffff579a198 items=0 ppid=5119 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.759000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:31.759000 audit: BPF prog-id=239 op=UNLOAD Dec 16 12:26:31.759000 audit[5140]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff579a198 items=0 ppid=5119 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:31.759000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:31.815303 
systemd-networkd[1602]: vxlan.calico: Link UP
Dec 16 12:26:31.815310 systemd-networkd[1602]: vxlan.calico: Gained carrier
Dec 16 12:26:31.840000 audit: BPF prog-id=240 op=LOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff38737f8 a2=98 a3=fffff38737e8 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.840000 audit: BPF prog-id=240 op=UNLOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff38737c8 a3=0 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.840000 audit: BPF prog-id=241 op=LOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff38734d8 a2=74 a3=95 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.840000 audit: BPF prog-id=241 op=UNLOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.840000 audit: BPF prog-id=242 op=LOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3873538 a2=94 a3=2 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.840000 audit: BPF prog-id=242 op=UNLOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.840000 audit: BPF prog-id=243 op=LOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff38733b8 a2=40 a3=fffff38733e8 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.840000 audit: BPF prog-id=243 op=UNLOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff38733e8 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.840000 audit: BPF prog-id=244 op=LOAD
Dec 16 12:26:31.840000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff3873508 a2=94 a3=b7 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.841000 audit: BPF prog-id=244 op=UNLOAD
Dec 16 12:26:31.841000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.841000 audit: BPF prog-id=245 op=LOAD
Dec 16 12:26:31.841000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff3872bb8 a2=94 a3=2 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.841000 audit: BPF prog-id=245 op=UNLOAD
Dec 16 12:26:31.841000 audit[5165]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.841000 audit: BPF prog-id=246 op=LOAD
Dec 16 12:26:31.841000 audit[5165]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff3872d48 a2=94 a3=30 items=0 ppid=5119 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 16 12:26:31.844000 audit: BPF prog-id=247 op=LOAD
Dec 16 12:26:31.844000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd8510878 a2=98 a3=ffffd8510868 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.844000 audit: BPF prog-id=247 op=UNLOAD
Dec 16 12:26:31.844000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd8510848 a3=0 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.844000 audit: BPF prog-id=248 op=LOAD
Dec 16 12:26:31.844000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd8510508 a2=74 a3=95 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.844000 audit: BPF prog-id=248 op=UNLOAD
Dec 16 12:26:31.844000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.844000 audit: BPF prog-id=249 op=LOAD
Dec 16 12:26:31.844000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd8510568 a2=94 a3=2 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.844000 audit: BPF prog-id=249 op=UNLOAD
Dec 16 12:26:31.844000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.981000 audit: BPF prog-id=250 op=LOAD
Dec 16 12:26:31.981000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd8510528 a2=40 a3=ffffd8510558 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.981000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.981000 audit: BPF prog-id=250 op=UNLOAD
Dec 16 12:26:31.981000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd8510558 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.981000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.993000 audit: BPF prog-id=251 op=LOAD
Dec 16 12:26:31.993000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd8510538 a2=94 a3=4 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.993000 audit: BPF prog-id=251 op=UNLOAD
Dec 16 12:26:31.993000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.993000 audit: BPF prog-id=252 op=LOAD
Dec 16 12:26:31.993000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd8510378 a2=94 a3=5 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.993000 audit: BPF prog-id=252 op=UNLOAD
Dec 16 12:26:31.993000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.993000 audit: BPF prog-id=253 op=LOAD
Dec 16 12:26:31.993000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd85105a8 a2=94 a3=6 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.994000 audit: BPF prog-id=253 op=UNLOAD
Dec 16 12:26:31.994000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.994000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.994000 audit: BPF prog-id=254 op=LOAD
Dec 16 12:26:31.994000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd850fd78 a2=94 a3=83 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.994000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.994000 audit: BPF prog-id=255 op=LOAD
Dec 16 12:26:31.994000 audit[5168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd850fb38 a2=94 a3=2 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.994000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.994000 audit: BPF prog-id=255 op=UNLOAD
Dec 16 12:26:31.994000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.994000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:31.995000 audit: BPF prog-id=254 op=UNLOAD
Dec 16 12:26:31.995000 audit[5168]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=30a0d620 a3=30a00b00 items=0 ppid=5119 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:31.995000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 16 12:26:32.010000 audit: BPF prog-id=246 op=UNLOAD
Dec 16 12:26:32.010000 audit[5119]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40006a7900 a2=0 a3=0 items=0 ppid=4162 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:32.010000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978
Dec 16 12:26:32.080000 audit[5242]: NETFILTER_CFG table=nat:127 family=2 entries=15 op=nft_register_chain pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 16 12:26:32.080000 audit[5242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc50345d0 a2=0 a3=ffff8054afa8 items=0 ppid=5119 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:32.080000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 16 12:26:32.084000 audit[5243]: NETFILTER_CFG table=mangle:128 family=2 entries=16 op=nft_register_chain pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 16 12:26:32.084000 audit[5243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff7739040 a2=0 a3=ffff93a04fa8 items=0 ppid=5119 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:32.084000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 16 12:26:32.090000 audit[5241]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=5241 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 16 12:26:32.090000 audit[5241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffea744de0 a2=0 a3=ffffa2ec2fa8 items=0 ppid=5119 pid=5241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:32.090000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 16 12:26:32.105000 audit[5245]: NETFILTER_CFG table=filter:130 family=2 entries=321 op=nft_register_chain pid=5245 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 16 12:26:32.105000 audit[5245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=190616 a0=3 a1=fffff310ecc0 a2=0 a3=ffffba683fa8 items=0 ppid=5119 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:26:32.105000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 16 12:26:32.486432 kubelet[2901]: I1216 12:26:32.486392 2901 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 12:26:33.725572 systemd-networkd[1602]: vxlan.calico: Gained IPv6LL
Dec 16 12:26:34.057608 containerd[1706]: time="2025-12-16T12:26:34.057465063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 12:26:34.415141 containerd[1706]: time="2025-12-16T12:26:34.415012655Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:26:34.416445 containerd[1706]: time="2025-12-16T12:26:34.416353619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 12:26:34.416445 containerd[1706]: time="2025-12-16T12:26:34.416388899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:26:34.416670 kubelet[2901]: E1216 12:26:34.416579 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:26:34.416670 kubelet[2901]: E1216 12:26:34.416656 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:26:34.417153 kubelet[2901]: E1216 12:26:34.416736 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:26:34.418161 containerd[1706]: time="2025-12-16T12:26:34.418129985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:26:34.777402 containerd[1706]: time="2025-12-16T12:26:34.777324303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:26:34.778895 containerd[1706]: time="2025-12-16T12:26:34.778779867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:26:34.779664 containerd[1706]: time="2025-12-16T12:26:34.778875708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:26:34.779739 kubelet[2901]: E1216 12:26:34.779090 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:26:34.779739 kubelet[2901]: E1216 12:26:34.779132 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:26:34.779739 kubelet[2901]: E1216 12:26:34.779194 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:26:34.779739 kubelet[2901]: E1216 12:26:34.779229 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e"
Dec 16 12:26:40.057650 containerd[1706]: time="2025-12-16T12:26:40.057585722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 12:26:40.504588 containerd[1706]: time="2025-12-16T12:26:40.504018481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:26:40.506034 containerd[1706]: time="2025-12-16T12:26:40.505976247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 12:26:40.506237 containerd[1706]: time="2025-12-16T12:26:40.506198088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:26:40.506650 kubelet[2901]: E1216 12:26:40.506454 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:26:40.506650 kubelet[2901]: E1216 12:26:40.506502 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:26:40.506650 kubelet[2901]: E1216 12:26:40.506579 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-758f6dbc5-vnxvc_calico-system(f8c50491-6041-443b-aedf-13a9fee1a718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:26:40.506650 kubelet[2901]: E1216 12:26:40.506612 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718"
Dec 16 12:26:41.057722 containerd[1706]: time="2025-12-16T12:26:41.057653385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 12:26:41.454215 containerd[1706]: time="2025-12-16T12:26:41.453982463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:26:41.455647 containerd[1706]: time="2025-12-16T12:26:41.455605668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 12:26:41.455727 containerd[1706]: time="2025-12-16T12:26:41.455692068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:26:41.455955 kubelet[2901]: E1216 12:26:41.455895 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:26:41.455955 kubelet[2901]: E1216 12:26:41.455951 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:26:41.456051 kubelet[2901]: E1216 12:26:41.456030 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h72ht_calico-system(5b2b3263-cc70-4a4f-a835-4543e7a31ab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:26:41.456087 kubelet[2901]: E1216 12:26:41.456066 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8"
Dec 16 12:26:42.057860 containerd[1706]: time="2025-12-16T12:26:42.057729729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:26:42.409769 containerd[1706]: time="2025-12-16T12:26:42.409486542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:26:42.411437 containerd[1706]: time="2025-12-16T12:26:42.411370069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:26:42.411537 containerd[1706]: time="2025-12-16T12:26:42.411436509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:26:42.411666 kubelet[2901]: E1216 12:26:42.411609 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:26:42.412274 kubelet[2901]: E1216 12:26:42.411676 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:26:42.412342 containerd[1706]: time="2025-12-16T12:26:42.411976551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 12:26:42.412537 kubelet[2901]: E1216 12:26:42.412444 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-9lrp5_calico-apiserver(06792be6-fad3-4b79-a250-73afb10c06a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:26:42.412537 kubelet[2901]: E1216 12:26:42.412492 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6"
Dec 16 12:26:42.777556 containerd[1706]: time="2025-12-16T12:26:42.777489129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:26:42.778961 containerd[1706]: time="2025-12-16T12:26:42.778893613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 12:26:42.779072 containerd[1706]: time="2025-12-16T12:26:42.778995653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:26:42.779258 kubelet[2901]: E1216 12:26:42.779212 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:26:42.779330 kubelet[2901]: E1216 12:26:42.779261 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:26:42.779375 kubelet[2901]: E1216 12:26:42.779356 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:26:42.781462 containerd[1706]: time="2025-12-16T12:26:42.781100380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 12:26:43.098497 containerd[1706]: time="2025-12-16T12:26:43.098199282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:26:43.099631 containerd[1706]: time="2025-12-16T12:26:43.099567727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 12:26:43.100043 containerd[1706]: time="2025-12-16T12:26:43.099625447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:26:43.100089 kubelet[2901]: E1216 12:26:43.099803 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:26:43.100089 kubelet[2901]: E1216 12:26:43.099863 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:26:43.100089 kubelet[2901]: E1216 12:26:43.099936 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:26:43.100089 kubelet[2901]: E1216 12:26:43.099974 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]"
pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:45.057382 containerd[1706]: time="2025-12-16T12:26:45.056935836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:26:45.437340 containerd[1706]: time="2025-12-16T12:26:45.437173741Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:45.438861 containerd[1706]: time="2025-12-16T12:26:45.438814707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:26:45.438935 containerd[1706]: time="2025-12-16T12:26:45.438872627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:45.439088 kubelet[2901]: E1216 12:26:45.439030 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:45.439088 kubelet[2901]: E1216 12:26:45.439082 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:45.439652 kubelet[2901]: E1216 12:26:45.439157 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-62gh6_calico-apiserver(ea12a60c-4683-4d5e-8e8f-9b466a85a781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:45.439652 kubelet[2901]: E1216 12:26:45.439194 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:26:48.062923 kubelet[2901]: E1216 12:26:48.062736 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:26:52.058315 kubelet[2901]: E1216 12:26:52.058035 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:26:53.056750 kubelet[2901]: E1216 12:26:53.056704 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:26:54.057765 kubelet[2901]: E1216 12:26:54.057707 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:26:57.057028 kubelet[2901]: E1216 12:26:57.056966 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:26:58.056981 kubelet[2901]: E1216 12:26:58.056902 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:27:01.060440 containerd[1706]: time="2025-12-16T12:27:01.060394537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:27:01.412026 containerd[1706]: time="2025-12-16T12:27:01.411892390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:01.413829 containerd[1706]: time="2025-12-16T12:27:01.413722796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:27:01.413829 containerd[1706]: time="2025-12-16T12:27:01.413771156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:01.414000 kubelet[2901]: E1216 12:27:01.413959 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:27:01.414544 kubelet[2901]: E1216 12:27:01.414008 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:27:01.414544 kubelet[2901]: E1216 12:27:01.414087 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:01.414953 containerd[1706]: time="2025-12-16T12:27:01.414923280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:27:01.758148 containerd[1706]: time="2025-12-16T12:27:01.758094706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:01.759610 containerd[1706]: time="2025-12-16T12:27:01.759572231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:27:01.759686 containerd[1706]: time="2025-12-16T12:27:01.759627431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:01.759848 kubelet[2901]: E1216 12:27:01.759809 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:27:01.759914 kubelet[2901]: E1216 12:27:01.759855 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:27:01.759945 kubelet[2901]: E1216 12:27:01.759923 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:01.759999 kubelet[2901]: E1216 12:27:01.759958 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:27:04.058452 containerd[1706]: time="2025-12-16T12:27:04.058399921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 
12:27:04.409435 containerd[1706]: time="2025-12-16T12:27:04.408165048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:04.411310 containerd[1706]: time="2025-12-16T12:27:04.411204298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:27:04.411421 containerd[1706]: time="2025-12-16T12:27:04.411282938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:04.411519 kubelet[2901]: E1216 12:27:04.411480 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:27:04.412385 kubelet[2901]: E1216 12:27:04.411523 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:27:04.412385 kubelet[2901]: E1216 12:27:04.411601 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h72ht_calico-system(5b2b3263-cc70-4a4f-a835-4543e7a31ab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:04.412385 kubelet[2901]: E1216 12:27:04.411632 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:27:06.058373 containerd[1706]: time="2025-12-16T12:27:06.058330127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:27:06.422581 containerd[1706]: time="2025-12-16T12:27:06.422447780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:06.425328 containerd[1706]: time="2025-12-16T12:27:06.425251189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:27:06.425606 containerd[1706]: time="2025-12-16T12:27:06.425431790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:06.425776 kubelet[2901]: E1216 12:27:06.425739 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:27:06.426063 kubelet[2901]: E1216 12:27:06.425787 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 
12:27:06.426063 kubelet[2901]: E1216 12:27:06.425867 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-758f6dbc5-vnxvc_calico-system(f8c50491-6041-443b-aedf-13a9fee1a718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:06.426063 kubelet[2901]: E1216 12:27:06.425898 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:27:08.059667 containerd[1706]: time="2025-12-16T12:27:08.059462417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:27:08.408972 containerd[1706]: time="2025-12-16T12:27:08.408547822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:08.410396 containerd[1706]: time="2025-12-16T12:27:08.410266267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:27:08.410396 containerd[1706]: time="2025-12-16T12:27:08.410320147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:08.410574 kubelet[2901]: E1216 12:27:08.410504 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:27:08.410574 kubelet[2901]: E1216 12:27:08.410551 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:27:08.410920 kubelet[2901]: E1216 12:27:08.410632 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:08.412670 containerd[1706]: time="2025-12-16T12:27:08.412543075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:27:08.771274 containerd[1706]: time="2025-12-16T12:27:08.771081510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:08.775925 containerd[1706]: time="2025-12-16T12:27:08.775548085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:27:08.776213 containerd[1706]: time="2025-12-16T12:27:08.775553285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:08.778663 kubelet[2901]: E1216 12:27:08.778446 2901 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:27:08.778663 kubelet[2901]: E1216 12:27:08.778496 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:27:08.778663 kubelet[2901]: E1216 12:27:08.778628 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:08.778820 kubelet[2901]: E1216 12:27:08.778722 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:27:10.058395 containerd[1706]: 
time="2025-12-16T12:27:10.058269139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:27:10.589585 containerd[1706]: time="2025-12-16T12:27:10.589521691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:10.593263 containerd[1706]: time="2025-12-16T12:27:10.593195623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:10.593536 containerd[1706]: time="2025-12-16T12:27:10.593195823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:27:10.593751 kubelet[2901]: E1216 12:27:10.593689 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:10.593751 kubelet[2901]: E1216 12:27:10.593744 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:10.594052 kubelet[2901]: E1216 12:27:10.593826 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-62gh6_calico-apiserver(ea12a60c-4683-4d5e-8e8f-9b466a85a781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:27:10.594052 kubelet[2901]: E1216 12:27:10.593857 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:27:12.058611 containerd[1706]: time="2025-12-16T12:27:12.058269945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:27:12.395235 containerd[1706]: time="2025-12-16T12:27:12.395106791Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:12.397237 containerd[1706]: time="2025-12-16T12:27:12.397174678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:27:12.397469 containerd[1706]: time="2025-12-16T12:27:12.397201278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:12.397814 kubelet[2901]: E1216 12:27:12.397752 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:12.398113 kubelet[2901]: E1216 12:27:12.397821 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:12.398113 kubelet[2901]: E1216 12:27:12.397905 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-9lrp5_calico-apiserver(06792be6-fad3-4b79-a250-73afb10c06a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:12.398113 kubelet[2901]: E1216 12:27:12.398010 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:27:13.058347 kubelet[2901]: E1216 12:27:13.058241 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" 
podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:27:15.058026 kubelet[2901]: E1216 12:27:15.057964 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:27:19.057000 kubelet[2901]: E1216 12:27:19.056937 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:27:22.059105 kubelet[2901]: E1216 12:27:22.059021 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:27:24.057929 kubelet[2901]: E1216 12:27:24.057538 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:27:24.057929 kubelet[2901]: E1216 12:27:24.057677 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:27:24.060339 kubelet[2901]: E1216 12:27:24.060171 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:27:28.057977 kubelet[2901]: E1216 12:27:28.057696 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:27:32.057591 kubelet[2901]: E1216 12:27:32.057462 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:27:33.057224 kubelet[2901]: E1216 12:27:33.057136 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: 
not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:27:36.057476 kubelet[2901]: E1216 12:27:36.057418 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:27:37.057729 kubelet[2901]: E1216 12:27:37.057541 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" 
podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:27:39.058161 kubelet[2901]: E1216 12:27:39.057080 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:27:42.058715 kubelet[2901]: E1216 12:27:42.058615 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:27:47.057860 containerd[1706]: time="2025-12-16T12:27:47.057815915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:27:47.478561 containerd[1706]: time="2025-12-16T12:27:47.478435430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:47.480014 containerd[1706]: time="2025-12-16T12:27:47.479945715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:27:47.480123 containerd[1706]: time="2025-12-16T12:27:47.480011596Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:47.481136 kubelet[2901]: E1216 12:27:47.481092 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:27:47.481858 kubelet[2901]: E1216 12:27:47.481155 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:27:47.481858 kubelet[2901]: E1216 12:27:47.481247 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-758f6dbc5-vnxvc_calico-system(f8c50491-6041-443b-aedf-13a9fee1a718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:47.481858 kubelet[2901]: E1216 12:27:47.481621 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:27:48.060130 kubelet[2901]: E1216 12:27:48.060007 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:27:51.057386 kubelet[2901]: E1216 12:27:51.057325 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:27:51.058439 containerd[1706]: time="2025-12-16T12:27:51.057885008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:27:51.392941 containerd[1706]: time="2025-12-16T12:27:51.392757167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:51.394912 containerd[1706]: time="2025-12-16T12:27:51.394767173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:27:51.394912 containerd[1706]: time="2025-12-16T12:27:51.394854454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:51.395038 kubelet[2901]: E1216 12:27:51.395006 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:51.395078 kubelet[2901]: E1216 12:27:51.395051 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:51.395164 kubelet[2901]: E1216 12:27:51.395115 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-62gh6_calico-apiserver(ea12a60c-4683-4d5e-8e8f-9b466a85a781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:51.395164 kubelet[2901]: E1216 12:27:51.395152 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:27:52.059655 containerd[1706]: 
time="2025-12-16T12:27:52.059138035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:27:52.407575 containerd[1706]: time="2025-12-16T12:27:52.407443117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:52.409250 containerd[1706]: time="2025-12-16T12:27:52.409203403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:27:52.409355 containerd[1706]: time="2025-12-16T12:27:52.409285883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:52.409542 kubelet[2901]: E1216 12:27:52.409500 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:27:52.409815 kubelet[2901]: E1216 12:27:52.409553 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:27:52.409815 kubelet[2901]: E1216 12:27:52.409661 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:52.410823 containerd[1706]: 
time="2025-12-16T12:27:52.410791768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:27:52.788608 containerd[1706]: time="2025-12-16T12:27:52.788472026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:52.789958 containerd[1706]: time="2025-12-16T12:27:52.789906630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:27:52.790075 containerd[1706]: time="2025-12-16T12:27:52.789946790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:52.790173 kubelet[2901]: E1216 12:27:52.790135 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:27:52.790249 kubelet[2901]: E1216 12:27:52.790184 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:27:52.790282 kubelet[2901]: E1216 12:27:52.790255 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:52.790430 kubelet[2901]: E1216 12:27:52.790348 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:27:57.058551 containerd[1706]: time="2025-12-16T12:27:57.058459708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:27:57.393905 containerd[1706]: time="2025-12-16T12:27:57.393776909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:57.395687 containerd[1706]: time="2025-12-16T12:27:57.395640515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:27:57.395836 containerd[1706]: time="2025-12-16T12:27:57.395723075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:57.395904 kubelet[2901]: E1216 12:27:57.395863 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:27:57.396702 kubelet[2901]: E1216 12:27:57.395910 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:27:57.396702 kubelet[2901]: E1216 12:27:57.395985 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h72ht_calico-system(5b2b3263-cc70-4a4f-a835-4543e7a31ab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:57.396702 kubelet[2901]: E1216 12:27:57.396016 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:27:59.057461 containerd[1706]: time="2025-12-16T12:27:59.057395111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:27:59.058146 kubelet[2901]: E1216 12:27:59.057944 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:27:59.416415 containerd[1706]: time="2025-12-16T12:27:59.416230868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:59.417989 containerd[1706]: time="2025-12-16T12:27:59.417929073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:27:59.418060 containerd[1706]: time="2025-12-16T12:27:59.418023274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:59.418302 kubelet[2901]: E1216 12:27:59.418210 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:27:59.418386 kubelet[2901]: E1216 12:27:59.418343 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:27:59.418494 kubelet[2901]: E1216 12:27:59.418475 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:59.419442 containerd[1706]: time="2025-12-16T12:27:59.419278758Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:27:59.764469 containerd[1706]: time="2025-12-16T12:27:59.764404390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:59.769198 containerd[1706]: time="2025-12-16T12:27:59.769098965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:59.769363 containerd[1706]: time="2025-12-16T12:27:59.769285326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:27:59.769823 kubelet[2901]: E1216 12:27:59.769613 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:27:59.769823 kubelet[2901]: E1216 12:27:59.769658 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:27:59.769823 kubelet[2901]: E1216 12:27:59.769739 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:59.769823 kubelet[2901]: E1216 12:27:59.769776 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:28:01.803590 systemd[1]: Started sshd@7-10.0.21.226:22-139.178.68.195:43112.service - OpenSSH per-connection server daemon (139.178.68.195:43112). Dec 16 12:28:01.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.21.226:22-139.178.68.195:43112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:01.804586 kernel: kauditd_printk_skb: 200 callbacks suppressed Dec 16 12:28:01.804661 kernel: audit: type=1130 audit(1765888081.802:737): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.21.226:22-139.178.68.195:43112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:28:02.056998 kubelet[2901]: E1216 12:28:02.056863 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:28:02.669000 audit[5421]: USER_ACCT pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:02.671505 sshd[5421]: Accepted publickey for core from 139.178.68.195 port 43112 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo Dec 16 12:28:02.674000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:02.676663 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:02.679740 kernel: audit: type=1101 audit(1765888082.669:738): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:02.679896 kernel: audit: type=1103 audit(1765888082.674:739): pid=5421 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:02.679931 kernel: audit: type=1006 audit(1765888082.674:740): pid=5421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1
Dec 16 12:28:02.674000 audit[5421]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfff4830 a2=3 a3=0 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:02.685775 kernel: audit: type=1300 audit(1765888082.674:740): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfff4830 a2=3 a3=0 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:02.674000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:02.687508 kernel: audit: type=1327 audit(1765888082.674:740): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:02.691578 systemd-logind[1677]: New session 8 of user core.
Dec 16 12:28:02.702799 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 12:28:02.705000 audit[5421]: USER_START pid=5421 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:02.710323 kernel: audit: type=1105 audit(1765888082.705:741): pid=5421 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:02.709000 audit[5449]: CRED_ACQ pid=5449 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:02.714322 kernel: audit: type=1103 audit(1765888082.709:742): pid=5449 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:03.277199 sshd[5449]: Connection closed by 139.178.68.195 port 43112
Dec 16 12:28:03.277990 sshd-session[5421]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:03.279000 audit[5421]: USER_END pid=5421 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:03.285021 systemd[1]: sshd@7-10.0.21.226:22-139.178.68.195:43112.service: Deactivated successfully.
Dec 16 12:28:03.279000 audit[5421]: CRED_DISP pid=5421 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:03.287105 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 12:28:03.288303 kernel: audit: type=1106 audit(1765888083.279:743): pid=5421 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:03.288514 kernel: audit: type=1104 audit(1765888083.279:744): pid=5421 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:03.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.21.226:22-139.178.68.195:43112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:03.289808 systemd-logind[1677]: Session 8 logged out. Waiting for processes to exit.
Dec 16 12:28:03.291329 systemd-logind[1677]: Removed session 8.
Dec 16 12:28:04.057871 containerd[1706]: time="2025-12-16T12:28:04.057807828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:28:04.399041 containerd[1706]: time="2025-12-16T12:28:04.398656767Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:28:04.400705 containerd[1706]: time="2025-12-16T12:28:04.400579013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:28:04.400907 containerd[1706]: time="2025-12-16T12:28:04.400819334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:28:04.401030 kubelet[2901]: E1216 12:28:04.400981 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:28:04.401372 kubelet[2901]: E1216 12:28:04.401030 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:28:04.401372 kubelet[2901]: E1216 12:28:04.401109 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-9lrp5_calico-apiserver(06792be6-fad3-4b79-a250-73afb10c06a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:28:04.401372 kubelet[2901]: E1216 12:28:04.401139 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6"
Dec 16 12:28:07.057433 kubelet[2901]: E1216 12:28:07.057359 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e"
Dec 16 12:28:08.441806 systemd[1]: Started sshd@8-10.0.21.226:22-139.178.68.195:43118.service - OpenSSH per-connection server daemon (139.178.68.195:43118).
Dec 16 12:28:08.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.21.226:22-139.178.68.195:43118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:08.442811 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:28:08.442982 kernel: audit: type=1130 audit(1765888088.440:746): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.21.226:22-139.178.68.195:43118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:09.289000 audit[5487]: USER_ACCT pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.291367 sshd[5487]: Accepted publickey for core from 139.178.68.195 port 43118 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:09.294327 kernel: audit: type=1101 audit(1765888089.289:747): pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.293000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.294915 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:09.299519 kernel: audit: type=1103 audit(1765888089.293:748): pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.299827 kernel: audit: type=1006 audit(1765888089.293:749): pid=5487 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Dec 16 12:28:09.300419 kernel: audit: type=1300 audit(1765888089.293:749): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffefc0ed80 a2=3 a3=0 items=0 ppid=1 pid=5487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:09.293000 audit[5487]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffefc0ed80 a2=3 a3=0 items=0 ppid=1 pid=5487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:09.293000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:09.304122 kernel: audit: type=1327 audit(1765888089.293:749): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:09.309377 systemd-logind[1677]: New session 9 of user core.
Dec 16 12:28:09.319516 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 12:28:09.320000 audit[5487]: USER_START pid=5487 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.322000 audit[5491]: CRED_ACQ pid=5491 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.328603 kernel: audit: type=1105 audit(1765888089.320:750): pid=5487 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.328950 kernel: audit: type=1103 audit(1765888089.322:751): pid=5491 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.835782 sshd[5491]: Connection closed by 139.178.68.195 port 43118
Dec 16 12:28:09.836508 sshd-session[5487]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:09.837000 audit[5487]: USER_END pid=5487 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.842787 systemd[1]: sshd@8-10.0.21.226:22-139.178.68.195:43118.service: Deactivated successfully.
Dec 16 12:28:09.837000 audit[5487]: CRED_DISP pid=5487 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.844603 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 12:28:09.846000 kernel: audit: type=1106 audit(1765888089.837:752): pid=5487 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.846104 kernel: audit: type=1104 audit(1765888089.837:753): pid=5487 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:09.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.21.226:22-139.178.68.195:43118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:09.846667 systemd-logind[1677]: Session 9 logged out. Waiting for processes to exit.
Dec 16 12:28:09.848759 systemd-logind[1677]: Removed session 9.
Dec 16 12:28:10.032594 systemd[1]: Started sshd@9-10.0.21.226:22-139.178.68.195:43132.service - OpenSSH per-connection server daemon (139.178.68.195:43132).
Dec 16 12:28:10.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.21.226:22-139.178.68.195:43132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:10.060136 kubelet[2901]: E1216 12:28:10.060043 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1"
Dec 16 12:28:10.921906 sshd[5506]: Accepted publickey for core from 139.178.68.195 port 43132 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:10.920000 audit[5506]: USER_ACCT pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:10.922000 audit[5506]: CRED_ACQ pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:10.922000 audit[5506]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7f827e0 a2=3 a3=0 items=0 ppid=1 pid=5506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:10.922000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:10.924255 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:10.930802 systemd-logind[1677]: New session 10 of user core.
Dec 16 12:28:10.941500 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 12:28:10.942000 audit[5506]: USER_START pid=5506 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:10.944000 audit[5509]: CRED_ACQ pid=5509 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:11.057326 kubelet[2901]: E1216 12:28:11.057138 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8"
Dec 16 12:28:11.529309 sshd[5509]: Connection closed by 139.178.68.195 port 43132
Dec 16 12:28:11.529378 sshd-session[5506]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:11.530000 audit[5506]: USER_END pid=5506 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:11.530000 audit[5506]: CRED_DISP pid=5506 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:11.535848 systemd[1]: sshd@9-10.0.21.226:22-139.178.68.195:43132.service: Deactivated successfully.
Dec 16 12:28:11.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.21.226:22-139.178.68.195:43132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:11.539060 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 12:28:11.546934 systemd-logind[1677]: Session 10 logged out. Waiting for processes to exit.
Dec 16 12:28:11.548428 systemd-logind[1677]: Removed session 10.
Dec 16 12:28:11.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.21.226:22-139.178.68.195:40206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:11.713531 systemd[1]: Started sshd@10-10.0.21.226:22-139.178.68.195:40206.service - OpenSSH per-connection server daemon (139.178.68.195:40206).
Dec 16 12:28:12.612000 audit[5520]: USER_ACCT pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:12.613766 sshd[5520]: Accepted publickey for core from 139.178.68.195 port 40206 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:12.613000 audit[5520]: CRED_ACQ pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:12.613000 audit[5520]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8490e20 a2=3 a3=0 items=0 ppid=1 pid=5520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:12.613000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:12.615095 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:12.619317 systemd-logind[1677]: New session 11 of user core.
Dec 16 12:28:12.625554 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 12:28:12.627000 audit[5520]: USER_START pid=5520 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:12.628000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:13.213006 sshd[5523]: Connection closed by 139.178.68.195 port 40206
Dec 16 12:28:13.213423 sshd-session[5520]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:13.213000 audit[5520]: USER_END pid=5520 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:13.213000 audit[5520]: CRED_DISP pid=5520 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:13.217634 systemd[1]: sshd@10-10.0.21.226:22-139.178.68.195:40206.service: Deactivated successfully.
Dec 16 12:28:13.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.21.226:22-139.178.68.195:40206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:13.219593 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 12:28:13.220941 systemd-logind[1677]: Session 11 logged out. Waiting for processes to exit.
Dec 16 12:28:13.221760 systemd-logind[1677]: Removed session 11.
Dec 16 12:28:14.057687 kubelet[2901]: E1216 12:28:14.057614 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718"
Dec 16 12:28:16.058058 kubelet[2901]: E1216 12:28:16.057914 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781"
Dec 16 12:28:18.058789 kubelet[2901]: E1216 12:28:18.058714 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6"
Dec 16 12:28:18.060675 kubelet[2901]: E1216 12:28:18.060595 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e"
Dec 16 12:28:18.413808 systemd[1]: Started sshd@11-10.0.21.226:22-139.178.68.195:40222.service - OpenSSH per-connection server daemon (139.178.68.195:40222).
Dec 16 12:28:18.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.21.226:22-139.178.68.195:40222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:18.414988 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 16 12:28:18.415056 kernel: audit: type=1130 audit(1765888098.412:773): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.21.226:22-139.178.68.195:40222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:19.333000 audit[5542]: USER_ACCT pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.335544 sshd[5542]: Accepted publickey for core from 139.178.68.195 port 40222 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:19.337334 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:19.335000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.341874 kernel: audit: type=1101 audit(1765888099.333:774): pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.341962 kernel: audit: type=1103 audit(1765888099.335:775): pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.343870 kernel: audit: type=1006 audit(1765888099.335:776): pid=5542 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Dec 16 12:28:19.335000 audit[5542]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffdd92e90 a2=3 a3=0 items=0 ppid=1 pid=5542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:19.347351 kernel: audit: type=1300 audit(1765888099.335:776): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffdd92e90 a2=3 a3=0 items=0 ppid=1 pid=5542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:19.335000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:19.348791 kernel: audit: type=1327 audit(1765888099.335:776): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:19.351838 systemd-logind[1677]: New session 12 of user core.
Dec 16 12:28:19.360743 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 12:28:19.361000 audit[5542]: USER_START pid=5542 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.363000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.369830 kernel: audit: type=1105 audit(1765888099.361:777): pid=5542 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.369932 kernel: audit: type=1103 audit(1765888099.363:778): pid=5545 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.923918 sshd[5545]: Connection closed by 139.178.68.195 port 40222
Dec 16 12:28:19.924766 sshd-session[5542]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:19.924000 audit[5542]: USER_END pid=5542 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.928967 systemd[1]: sshd@11-10.0.21.226:22-139.178.68.195:40222.service: Deactivated successfully.
Dec 16 12:28:19.924000 audit[5542]: CRED_DISP pid=5542 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.932894 kernel: audit: type=1106 audit(1765888099.924:779): pid=5542 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.932956 kernel: audit: type=1104 audit(1765888099.924:780): pid=5542 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:19.933013 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 12:28:19.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.21.226:22-139.178.68.195:40222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:19.936886 systemd-logind[1677]: Session 12 logged out. Waiting for processes to exit.
Dec 16 12:28:19.938458 systemd-logind[1677]: Removed session 12.
Dec 16 12:28:22.057568 kubelet[2901]: E1216 12:28:22.057505 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8"
Dec 16 12:28:23.057612 kubelet[2901]: E1216 12:28:23.057521 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1"
Dec 16 12:28:25.112472 systemd[1]: Started sshd@12-10.0.21.226:22-139.178.68.195:57872.service - OpenSSH per-connection server daemon (139.178.68.195:57872).
Dec 16 12:28:25.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.21.226:22-139.178.68.195:57872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:25.113567 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:28:25.113630 kernel: audit: type=1130 audit(1765888105.111:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.21.226:22-139.178.68.195:57872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:26.032000 audit[5558]: USER_ACCT pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:26.037586 sshd[5558]: Accepted publickey for core from 139.178.68.195 port 57872 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:26.038338 kernel: audit: type=1101 audit(1765888106.032:783): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:26.037000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:26.039080 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:26.044556 kernel: audit: type=1103 audit(1765888106.037:784): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:26.044656 kernel: audit: type=1006 audit(1765888106.037:785): pid=5558 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Dec 16 12:28:26.037000 audit[5558]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3115670 a2=3 a3=0 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:26.048329 kernel: audit: type=1300 audit(1765888106.037:785): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3115670 a2=3 a3=0 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:26.037000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:26.049696 kernel: audit: type=1327 audit(1765888106.037:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:26.053480 systemd-logind[1677]: New session 13 of user core.
Dec 16 12:28:26.058310 kubelet[2901]: E1216 12:28:26.058232 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:28:26.060522 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:28:26.062000 audit[5558]: USER_START pid=5558 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:26.067000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:26.071301 kernel: audit: type=1105 audit(1765888106.062:786): pid=5558 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:26.071383 kernel: audit: type=1103 audit(1765888106.067:787): pid=5561 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:26.633374 sshd[5561]: Connection closed by 139.178.68.195 port 57872 Dec 16 12:28:26.632844 sshd-session[5558]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:26.632000 audit[5558]: USER_END pid=5558 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:26.633000 audit[5558]: CRED_DISP pid=5558 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:26.638671 systemd[1]: sshd@12-10.0.21.226:22-139.178.68.195:57872.service: Deactivated successfully. Dec 16 12:28:26.640991 kernel: audit: type=1106 audit(1765888106.632:788): pid=5558 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:26.641108 kernel: audit: type=1104 audit(1765888106.633:789): pid=5558 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:26.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.21.226:22-139.178.68.195:57872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:28:26.641653 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:28:26.642867 systemd-logind[1677]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:28:26.644432 systemd-logind[1677]: Removed session 13. Dec 16 12:28:27.057137 kubelet[2901]: E1216 12:28:27.057081 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:28:30.058927 kubelet[2901]: E1216 12:28:30.058860 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:28:31.811911 systemd[1]: Started sshd@13-10.0.21.226:22-139.178.68.195:51212.service - OpenSSH per-connection server daemon (139.178.68.195:51212). 
Dec 16 12:28:31.812990 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:28:31.813022 kernel: audit: type=1130 audit(1765888111.810:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.21.226:22-139.178.68.195:51212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:31.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.21.226:22-139.178.68.195:51212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:32.701000 audit[5575]: USER_ACCT pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:32.702998 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 51212 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo Dec 16 12:28:32.707369 kernel: audit: type=1101 audit(1765888112.701:792): pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:32.706000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:32.708188 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:32.713650 kernel: audit: type=1103 audit(1765888112.706:793): pid=5575 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:32.714196 kernel: audit: type=1006 audit(1765888112.706:794): pid=5575 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:28:32.714229 kernel: audit: type=1300 audit(1765888112.706:794): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd8e6940 a2=3 a3=0 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:28:32.706000 audit[5575]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd8e6940 a2=3 a3=0 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:28:32.718084 systemd-logind[1677]: New session 14 of user core. Dec 16 12:28:32.706000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:28:32.719385 kernel: audit: type=1327 audit(1765888112.706:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:28:32.724504 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:28:32.727000 audit[5575]: USER_START pid=5575 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:32.729000 audit[5603]: CRED_ACQ pid=5603 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:32.736076 kernel: audit: type=1105 audit(1765888112.727:795): pid=5575 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:32.736814 kernel: audit: type=1103 audit(1765888112.729:796): pid=5603 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:33.057894 kubelet[2901]: E1216 12:28:33.057820 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:28:33.058639 kubelet[2901]: E1216 12:28:33.058315 2901 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:28:33.286315 sshd[5603]: Connection closed by 139.178.68.195 port 51212 Dec 16 12:28:33.286829 sshd-session[5575]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:33.286000 audit[5575]: USER_END pid=5575 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:33.291053 systemd-logind[1677]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:28:33.291336 systemd[1]: sshd@13-10.0.21.226:22-139.178.68.195:51212.service: Deactivated successfully. Dec 16 12:28:33.287000 audit[5575]: CRED_DISP pid=5575 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:33.293445 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 16 12:28:33.295064 kernel: audit: type=1106 audit(1765888113.286:797): pid=5575 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:33.295130 kernel: audit: type=1104 audit(1765888113.287:798): pid=5575 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:33.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.21.226:22-139.178.68.195:51212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:33.295235 systemd-logind[1677]: Removed session 14. Dec 16 12:28:33.453151 systemd[1]: Started sshd@14-10.0.21.226:22-139.178.68.195:51222.service - OpenSSH per-connection server daemon (139.178.68.195:51222). Dec 16 12:28:33.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.21.226:22-139.178.68.195:51222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:28:34.059137 kubelet[2901]: E1216 12:28:34.059048 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:28:34.291000 audit[5616]: USER_ACCT pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:34.292968 sshd[5616]: Accepted publickey for core from 139.178.68.195 port 51222 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo Dec 16 12:28:34.292000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:34.292000 audit[5616]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd8de630 a2=3 a3=0 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:28:34.292000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:28:34.294487 sshd-session[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:34.302010 systemd-logind[1677]: New session 15 of user core. Dec 16 12:28:34.309683 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:28:34.311000 audit[5616]: USER_START pid=5616 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:34.313000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:34.895414 sshd[5619]: Connection closed by 139.178.68.195 port 51222 Dec 16 12:28:34.895821 sshd-session[5616]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:34.895000 audit[5616]: USER_END pid=5616 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:34.895000 audit[5616]: CRED_DISP pid=5616 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:34.902093 systemd[1]: 
sshd@14-10.0.21.226:22-139.178.68.195:51222.service: Deactivated successfully. Dec 16 12:28:34.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.21.226:22-139.178.68.195:51222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:34.906762 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:28:34.908024 systemd-logind[1677]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:28:34.914539 systemd-logind[1677]: Removed session 15. Dec 16 12:28:35.068118 systemd[1]: Started sshd@15-10.0.21.226:22-139.178.68.195:51224.service - OpenSSH per-connection server daemon (139.178.68.195:51224). Dec 16 12:28:35.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.21.226:22-139.178.68.195:51224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:35.937000 audit[5630]: USER_ACCT pid=5630 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:35.939006 sshd[5630]: Accepted publickey for core from 139.178.68.195 port 51224 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo Dec 16 12:28:35.938000 audit[5630]: CRED_ACQ pid=5630 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:35.939000 audit[5630]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1c0d500 a2=3 a3=0 items=0 ppid=1 pid=5630 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:28:35.939000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:28:35.941338 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:35.948128 systemd-logind[1677]: New session 16 of user core. Dec 16 12:28:35.961634 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:28:35.963000 audit[5630]: USER_START pid=5630 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:35.966000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:36.705000 audit[5646]: NETFILTER_CFG table=filter:131 family=2 entries=26 op=nft_register_rule pid=5646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:28:36.705000 audit[5646]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe3c87260 a2=0 a3=1 items=0 ppid=3030 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:28:36.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:28:36.713000 audit[5646]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5646 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
12:28:36.713000 audit[5646]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe3c87260 a2=0 a3=1 items=0 ppid=3030 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:28:36.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:28:36.873390 sshd[5634]: Connection closed by 139.178.68.195 port 51224 Dec 16 12:28:36.873884 sshd-session[5630]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:36.874000 audit[5630]: USER_END pid=5630 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:36.878752 systemd[1]: sshd@15-10.0.21.226:22-139.178.68.195:51224.service: Deactivated successfully. Dec 16 12:28:36.879803 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 16 12:28:36.879853 kernel: audit: type=1106 audit(1765888116.874:817): pid=5630 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:36.874000 audit[5630]: CRED_DISP pid=5630 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:36.882065 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 16 12:28:36.883141 kernel: audit: type=1104 audit(1765888116.874:818): pid=5630 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:36.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.21.226:22-139.178.68.195:51224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:36.886487 kernel: audit: type=1131 audit(1765888116.878:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.21.226:22-139.178.68.195:51224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:36.886933 systemd-logind[1677]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:28:36.888097 systemd-logind[1677]: Removed session 16. Dec 16 12:28:37.053695 systemd[1]: Started sshd@16-10.0.21.226:22-139.178.68.195:51236.service - OpenSSH per-connection server daemon (139.178.68.195:51236). Dec 16 12:28:37.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.21.226:22-139.178.68.195:51236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:37.057331 kernel: audit: type=1130 audit(1765888117.052:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.21.226:22-139.178.68.195:51236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 16 12:28:37.731000 audit[5655]: NETFILTER_CFG table=filter:133 family=2 entries=38 op=nft_register_rule pid=5655 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 12:28:37.731000 audit[5655]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff05cf380 a2=0 a3=1 items=0 ppid=3030 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:37.738188 kernel: audit: type=1325 audit(1765888117.731:821): table=filter:133 family=2 entries=38 op=nft_register_rule pid=5655 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 12:28:37.738241 kernel: audit: type=1300 audit(1765888117.731:821): arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff05cf380 a2=0 a3=1 items=0 ppid=3030 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:37.738315 kernel: audit: type=1327 audit(1765888117.731:821): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:28:37.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:28:37.739000 audit[5655]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5655 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 12:28:37.739000 audit[5655]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff05cf380 a2=0 a3=1 items=0 ppid=3030 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:37.745924 kernel: audit: type=1325 audit(1765888117.739:822): table=nat:134 family=2 entries=20 op=nft_register_rule pid=5655 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 12:28:37.746035 kernel: audit: type=1300 audit(1765888117.739:822): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff05cf380 a2=0 a3=1 items=0 ppid=3030 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:37.746060 kernel: audit: type=1327 audit(1765888117.739:822): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:28:37.739000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:28:37.913000 audit[5651]: USER_ACCT pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:37.915017 sshd[5651]: Accepted publickey for core from 139.178.68.195 port 51236 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:37.914000 audit[5651]: CRED_ACQ pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:37.914000 audit[5651]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff913e9e0 a2=3 a3=0 items=0 ppid=1 pid=5651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:37.914000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:37.916104 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:37.920174 systemd-logind[1677]: New session 17 of user core.
Dec 16 12:28:37.932511 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 12:28:37.933000 audit[5651]: USER_START pid=5651 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:37.935000 audit[5656]: CRED_ACQ pid=5656 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:38.594322 sshd[5656]: Connection closed by 139.178.68.195 port 51236
Dec 16 12:28:38.592508 sshd-session[5651]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:38.592000 audit[5651]: USER_END pid=5651 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:38.592000 audit[5651]: CRED_DISP pid=5651 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:38.598623 systemd-logind[1677]: Session 17 logged out. Waiting for processes to exit.
Dec 16 12:28:38.600061 systemd[1]: sshd@16-10.0.21.226:22-139.178.68.195:51236.service: Deactivated successfully.
Dec 16 12:28:38.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.21.226:22-139.178.68.195:51236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:38.608253 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 12:28:38.609699 systemd-logind[1677]: Removed session 17.
Dec 16 12:28:38.760270 systemd[1]: Started sshd@17-10.0.21.226:22-139.178.68.195:51240.service - OpenSSH per-connection server daemon (139.178.68.195:51240).
Dec 16 12:28:38.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.21.226:22-139.178.68.195:51240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:39.057510 kubelet[2901]: E1216 12:28:39.057456 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718"
Dec 16 12:28:39.608000 audit[5668]: USER_ACCT pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:39.609651 sshd[5668]: Accepted publickey for core from 139.178.68.195 port 51240 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:39.609000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:39.609000 audit[5668]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2c9fb40 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:39.609000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:39.611037 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:39.616335 systemd-logind[1677]: New session 18 of user core.
Dec 16 12:28:39.622490 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 12:28:39.623000 audit[5668]: USER_START pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:39.625000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:40.057425 kubelet[2901]: E1216 12:28:40.057269 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781"
Dec 16 12:28:40.155335 sshd[5671]: Connection closed by 139.178.68.195 port 51240
Dec 16 12:28:40.155221 sshd-session[5668]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:40.156000 audit[5668]: USER_END pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:40.156000 audit[5668]: CRED_DISP pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:40.160609 systemd[1]: sshd@17-10.0.21.226:22-139.178.68.195:51240.service: Deactivated successfully.
Dec 16 12:28:40.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.21.226:22-139.178.68.195:51240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:40.165005 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 12:28:40.169328 systemd-logind[1677]: Session 18 logged out. Waiting for processes to exit.
Dec 16 12:28:40.172012 systemd-logind[1677]: Removed session 18.
Dec 16 12:28:40.179000 audit[5687]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 12:28:40.179000 audit[5687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd5ca6ce0 a2=0 a3=1 items=0 ppid=3030 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:40.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:28:40.187000 audit[5687]: NETFILTER_CFG table=nat:136 family=2 entries=104 op=nft_register_chain pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 12:28:40.187000 audit[5687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd5ca6ce0 a2=0 a3=1 items=0 ppid=3030 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:40.187000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:28:41.058675 kubelet[2901]: E1216 12:28:41.058545 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e"
Dec 16 12:28:45.057989 kubelet[2901]: E1216 12:28:45.057889 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1"
Dec 16 12:28:45.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.21.226:22-139.178.68.195:43014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:45.335352 systemd[1]: Started sshd@18-10.0.21.226:22-139.178.68.195:43014.service - OpenSSH per-connection server daemon (139.178.68.195:43014).
Dec 16 12:28:45.338763 kernel: kauditd_printk_skb: 27 callbacks suppressed
Dec 16 12:28:45.338836 kernel: audit: type=1130 audit(1765888125.334:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.21.226:22-139.178.68.195:43014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:46.182000 audit[5690]: USER_ACCT pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.183851 sshd[5690]: Accepted publickey for core from 139.178.68.195 port 43014 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:46.185921 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:46.184000 audit[5690]: CRED_ACQ pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.190228 kernel: audit: type=1101 audit(1765888126.182:843): pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.190458 kernel: audit: type=1103 audit(1765888126.184:844): pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.192272 kernel: audit: type=1006 audit(1765888126.184:845): pid=5690 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Dec 16 12:28:46.184000 audit[5690]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd960a090 a2=3 a3=0 items=0 ppid=1 pid=5690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:46.194399 systemd-logind[1677]: New session 19 of user core.
Dec 16 12:28:46.195707 kernel: audit: type=1300 audit(1765888126.184:845): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd960a090 a2=3 a3=0 items=0 ppid=1 pid=5690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:46.195769 kernel: audit: type=1327 audit(1765888126.184:845): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:46.184000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:46.205467 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 12:28:46.206000 audit[5690]: USER_START pid=5690 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.207000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.214301 kernel: audit: type=1105 audit(1765888126.206:846): pid=5690 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.214369 kernel: audit: type=1103 audit(1765888126.207:847): pid=5693 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.754247 sshd[5693]: Connection closed by 139.178.68.195 port 43014
Dec 16 12:28:46.754790 sshd-session[5690]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:46.754000 audit[5690]: USER_END pid=5690 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.759009 systemd[1]: sshd@18-10.0.21.226:22-139.178.68.195:43014.service: Deactivated successfully.
Dec 16 12:28:46.754000 audit[5690]: CRED_DISP pid=5690 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.762066 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 12:28:46.762736 kernel: audit: type=1106 audit(1765888126.754:848): pid=5690 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.762804 kernel: audit: type=1104 audit(1765888126.754:849): pid=5690 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:46.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.21.226:22-139.178.68.195:43014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:46.766481 systemd-logind[1677]: Session 19 logged out. Waiting for processes to exit.
Dec 16 12:28:46.767664 systemd-logind[1677]: Removed session 19.
Dec 16 12:28:47.058777 kubelet[2901]: E1216 12:28:47.056737 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6"
Dec 16 12:28:48.057354 kubelet[2901]: E1216 12:28:48.057281 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8"
Dec 16 12:28:51.934095 systemd[1]: Started sshd@19-10.0.21.226:22-139.178.68.195:32900.service - OpenSSH per-connection server daemon (139.178.68.195:32900).
Dec 16 12:28:51.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.21.226:22-139.178.68.195:32900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:51.937991 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:28:51.938084 kernel: audit: type=1130 audit(1765888131.933:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.21.226:22-139.178.68.195:32900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:52.799434 sshd[5708]: Accepted publickey for core from 139.178.68.195 port 32900 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:52.798000 audit[5708]: USER_ACCT pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:52.803264 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:52.803538 kernel: audit: type=1101 audit(1765888132.798:852): pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:52.803589 kernel: audit: type=1103 audit(1765888132.801:853): pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:52.801000 audit[5708]: CRED_ACQ pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:52.808436 kernel: audit: type=1006 audit(1765888132.801:854): pid=5708 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Dec 16 12:28:52.801000 audit[5708]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff42699d0 a2=3 a3=0 items=0 ppid=1 pid=5708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:52.812120 kernel: audit: type=1300 audit(1765888132.801:854): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff42699d0 a2=3 a3=0 items=0 ppid=1 pid=5708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:52.801000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:52.813411 kernel: audit: type=1327 audit(1765888132.801:854): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:52.816240 systemd-logind[1677]: New session 20 of user core.
Dec 16 12:28:52.826507 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 12:28:52.828000 audit[5708]: USER_START pid=5708 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:52.828000 audit[5711]: CRED_ACQ pid=5711 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:52.835845 kernel: audit: type=1105 audit(1765888132.828:855): pid=5708 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:52.836094 kernel: audit: type=1103 audit(1765888132.828:856): pid=5711 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:53.359457 sshd[5711]: Connection closed by 139.178.68.195 port 32900
Dec 16 12:28:53.359994 sshd-session[5708]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:53.359000 audit[5708]: USER_END pid=5708 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:53.364051 systemd[1]: sshd@19-10.0.21.226:22-139.178.68.195:32900.service: Deactivated successfully.
Dec 16 12:28:53.360000 audit[5708]: CRED_DISP pid=5708 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:53.366286 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 12:28:53.368147 kernel: audit: type=1106 audit(1765888133.359:857): pid=5708 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:53.370395 kernel: audit: type=1104 audit(1765888133.360:858): pid=5708 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:53.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.21.226:22-139.178.68.195:32900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:53.372581 systemd-logind[1677]: Session 20 logged out. Waiting for processes to exit.
Dec 16 12:28:53.373768 systemd-logind[1677]: Removed session 20.
Dec 16 12:28:54.057270 kubelet[2901]: E1216 12:28:54.056934 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781"
Dec 16 12:28:54.058634 kubelet[2901]: E1216 12:28:54.057535 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718"
Dec 16 12:28:56.057738 kubelet[2901]: E1216 12:28:56.057651 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e"
Dec 16 12:28:57.057809 kubelet[2901]: E1216 12:28:57.057753 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1"
Dec 16 12:28:58.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.21.226:22-139.178.68.195:32902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:58.532028 systemd[1]: Started sshd@20-10.0.21.226:22-139.178.68.195:32902.service - OpenSSH per-connection server daemon (139.178.68.195:32902).
Dec 16 12:28:58.533118 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:28:58.533172 kernel: audit: type=1130 audit(1765888138.530:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.21.226:22-139.178.68.195:32902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:28:59.057162 kubelet[2901]: E1216 12:28:59.056591 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6"
Dec 16 12:28:59.378000 audit[5724]: USER_ACCT pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:59.380520 sshd[5724]: Accepted publickey for core from 139.178.68.195 port 32902 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo
Dec 16 12:28:59.382000 audit[5724]: CRED_ACQ pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:59.384211 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:59.386736 kernel: audit: type=1101 audit(1765888139.378:861): pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:59.386796 kernel: audit: type=1103 audit(1765888139.382:862): pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 12:28:59.386851 kernel: audit: type=1006 audit(1765888139.382:863): pid=5724 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Dec 16 12:28:59.382000 audit[5724]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc963a670 a2=3 a3=0 items=0 ppid=1 pid=5724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:59.391616 kernel: audit: type=1300 audit(1765888139.382:863): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc963a670 a2=3 a3=0 items=0 ppid=1 pid=5724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:28:59.382000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:59.392940 kernel: audit: type=1327 audit(1765888139.382:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:28:59.395035 systemd-logind[1677]: New session 21 of user core.
Dec 16 12:28:59.402475 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 12:28:59.404000 audit[5724]: USER_START pid=5724 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:59.406000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:59.412500 kernel: audit: type=1105 audit(1765888139.404:864): pid=5724 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:59.412567 kernel: audit: type=1103 audit(1765888139.406:865): pid=5728 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:59.934730 sshd[5728]: Connection closed by 139.178.68.195 port 32902 Dec 16 12:28:59.935367 sshd-session[5724]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:59.935000 audit[5724]: USER_END pid=5724 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:59.939938 systemd[1]: sshd@20-10.0.21.226:22-139.178.68.195:32902.service: Deactivated successfully. 
Dec 16 12:28:59.936000 audit[5724]: CRED_DISP pid=5724 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:59.942895 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:28:59.944009 kernel: audit: type=1106 audit(1765888139.935:866): pid=5724 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:59.944081 kernel: audit: type=1104 audit(1765888139.936:867): pid=5724 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:28:59.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.21.226:22-139.178.68.195:32902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:28:59.945826 systemd-logind[1677]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:28:59.947525 systemd-logind[1677]: Removed session 21. 
Dec 16 12:29:01.056850 kubelet[2901]: E1216 12:29:01.056784 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:29:05.057485 kubelet[2901]: E1216 12:29:05.057438 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:29:05.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.21.226:22-139.178.68.195:36864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:29:05.129977 systemd[1]: Started sshd@21-10.0.21.226:22-139.178.68.195:36864.service - OpenSSH per-connection server daemon (139.178.68.195:36864). Dec 16 12:29:05.130854 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:29:05.130904 kernel: audit: type=1130 audit(1765888145.128:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.21.226:22-139.178.68.195:36864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:29:06.020000 audit[5767]: USER_ACCT pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.022132 sshd[5767]: Accepted publickey for core from 139.178.68.195 port 36864 ssh2: RSA SHA256:N7ajpgMoYx0vOiVmK5+QnVX4Z+PaVqfMpOuN3iZB1Fo Dec 16 12:29:06.024000 audit[5767]: CRED_ACQ pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.026611 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:06.029840 kernel: audit: type=1101 audit(1765888146.020:870): pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.029995 kernel: audit: type=1103 audit(1765888146.024:871): pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.030019 kernel: audit: type=1006 audit(1765888146.024:872): pid=5767 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 12:29:06.024000 audit[5767]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6cbdd70 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:06.035343 systemd-logind[1677]: New session 22 of user core. Dec 16 12:29:06.035951 kernel: audit: type=1300 audit(1765888146.024:872): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6cbdd70 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:06.024000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:29:06.037453 kernel: audit: type=1327 audit(1765888146.024:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:29:06.042517 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:29:06.044000 audit[5767]: USER_START pid=5767 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.044000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.053321 kernel: audit: type=1105 audit(1765888146.044:873): pid=5767 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.053409 kernel: audit: type=1103 audit(1765888146.044:874): pid=5770 uid=0 
auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.056612 kubelet[2901]: E1216 12:29:06.056556 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:29:06.601790 sshd[5770]: Connection closed by 139.178.68.195 port 36864 Dec 16 12:29:06.602978 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:06.603000 audit[5767]: USER_END pid=5767 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.607886 systemd-logind[1677]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:29:06.608134 systemd[1]: sshd@21-10.0.21.226:22-139.178.68.195:36864.service: Deactivated successfully. Dec 16 12:29:06.603000 audit[5767]: CRED_DISP pid=5767 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.609873 systemd[1]: session-22.scope: Deactivated successfully. 
Dec 16 12:29:06.611708 systemd-logind[1677]: Removed session 22. Dec 16 12:29:06.612784 kernel: audit: type=1106 audit(1765888146.603:875): pid=5767 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.612842 kernel: audit: type=1104 audit(1765888146.603:876): pid=5767 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:29:06.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.21.226:22-139.178.68.195:36864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:29:08.061042 kubelet[2901]: E1216 12:29:08.060969 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:29:12.061927 kubelet[2901]: E1216 12:29:12.061882 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:29:12.063329 kubelet[2901]: E1216 12:29:12.062643 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:29:13.057862 kubelet[2901]: E1216 12:29:13.057284 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:29:16.062386 containerd[1706]: time="2025-12-16T12:29:16.061924349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:16.454505 containerd[1706]: time="2025-12-16T12:29:16.454240814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:16.456182 containerd[1706]: time="2025-12-16T12:29:16.456126580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:16.456283 containerd[1706]: time="2025-12-16T12:29:16.456198460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:29:16.456408 kubelet[2901]: E1216 12:29:16.456364 2901 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:16.456878 kubelet[2901]: E1216 12:29:16.456411 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:16.456878 kubelet[2901]: E1216 12:29:16.456492 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-62gh6_calico-apiserver(ea12a60c-4683-4d5e-8e8f-9b466a85a781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:16.456878 kubelet[2901]: E1216 12:29:16.456522 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:29:21.057334 containerd[1706]: time="2025-12-16T12:29:21.057277330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:29:21.406848 containerd[1706]: time="2025-12-16T12:29:21.406701056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:21.408204 containerd[1706]: 
time="2025-12-16T12:29:21.408145861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:29:21.408313 containerd[1706]: time="2025-12-16T12:29:21.408180381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:29:21.408480 kubelet[2901]: E1216 12:29:21.408432 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:21.408804 kubelet[2901]: E1216 12:29:21.408482 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:21.408804 kubelet[2901]: E1216 12:29:21.408553 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-758f6dbc5-vnxvc_calico-system(f8c50491-6041-443b-aedf-13a9fee1a718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:21.408804 kubelet[2901]: E1216 12:29:21.408583 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:29:22.058881 containerd[1706]: time="2025-12-16T12:29:22.058823198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:29:22.402232 containerd[1706]: time="2025-12-16T12:29:22.401218382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:22.403159 containerd[1706]: time="2025-12-16T12:29:22.403048308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:29:22.403159 containerd[1706]: time="2025-12-16T12:29:22.403090388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:29:22.403319 kubelet[2901]: E1216 12:29:22.403264 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:22.403406 kubelet[2901]: E1216 12:29:22.403326 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:22.403438 kubelet[2901]: E1216 12:29:22.403399 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start 
failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:22.404401 containerd[1706]: time="2025-12-16T12:29:22.404326032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:29:22.752400 containerd[1706]: time="2025-12-16T12:29:22.752325794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:22.754124 containerd[1706]: time="2025-12-16T12:29:22.754078519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:29:22.754181 containerd[1706]: time="2025-12-16T12:29:22.754120959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:29:22.754384 kubelet[2901]: E1216 12:29:22.754343 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:22.754643 kubelet[2901]: E1216 12:29:22.754395 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:22.754643 kubelet[2901]: E1216 
12:29:22.754465 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6c89f55df9-rhjdd_calico-system(392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:22.754643 kubelet[2901]: E1216 12:29:22.754503 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:29:25.057701 containerd[1706]: time="2025-12-16T12:29:25.057606664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:25.614679 containerd[1706]: time="2025-12-16T12:29:25.614576779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:25.616043 containerd[1706]: time="2025-12-16T12:29:25.615993944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:25.616106 containerd[1706]: time="2025-12-16T12:29:25.616051104Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:29:25.616277 kubelet[2901]: E1216 12:29:25.616231 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:25.616563 kubelet[2901]: E1216 12:29:25.616303 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:25.616563 kubelet[2901]: E1216 12:29:25.616380 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65dbdbb8c6-9lrp5_calico-apiserver(06792be6-fad3-4b79-a250-73afb10c06a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:25.616563 kubelet[2901]: E1216 12:29:25.616410 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 12:29:26.057407 containerd[1706]: time="2025-12-16T12:29:26.057217606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:29:26.426345 containerd[1706]: time="2025-12-16T12:29:26.426087835Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:26.428523 containerd[1706]: time="2025-12-16T12:29:26.428453202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:29:26.428724 containerd[1706]: time="2025-12-16T12:29:26.428501443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:29:26.428822 kubelet[2901]: E1216 12:29:26.428684 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:26.428822 kubelet[2901]: E1216 12:29:26.428727 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:26.428822 kubelet[2901]: E1216 12:29:26.428804 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h72ht_calico-system(5b2b3263-cc70-4a4f-a835-4543e7a31ab8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:26.428958 kubelet[2901]: E1216 12:29:26.428833 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8" Dec 16 12:29:27.057168 kubelet[2901]: E1216 12:29:27.057111 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:29:27.057551 containerd[1706]: time="2025-12-16T12:29:27.057312789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:29:27.403286 containerd[1706]: time="2025-12-16T12:29:27.403167384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:27.404922 containerd[1706]: time="2025-12-16T12:29:27.404881030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:29:27.405024 containerd[1706]: time="2025-12-16T12:29:27.404961110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:29:27.405159 kubelet[2901]: E1216 12:29:27.405124 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:27.405216 kubelet[2901]: E1216 12:29:27.405170 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:27.405268 kubelet[2901]: E1216 12:29:27.405248 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:27.406107 containerd[1706]: time="2025-12-16T12:29:27.406048393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:29:27.746183 containerd[1706]: time="2025-12-16T12:29:27.746077409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:27.747985 containerd[1706]: time="2025-12-16T12:29:27.747931015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:29:27.748071 containerd[1706]: time="2025-12-16T12:29:27.748023896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:29:27.748255 kubelet[2901]: E1216 12:29:27.748209 2901 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:27.748311 kubelet[2901]: E1216 12:29:27.748253 2901 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:27.748379 kubelet[2901]: E1216 12:29:27.748348 2901 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-k8fpw_calico-system(1da2d440-02fc-4a40-abf1-80ffcd9275c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:27.748417 kubelet[2901]: E1216 12:29:27.748392 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1" Dec 16 12:29:33.056559 kubelet[2901]: E1216 12:29:33.056490 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718" Dec 16 12:29:34.861560 kubelet[2901]: E1216 12:29:34.861415 2901 controller.go:195] "Failed to update lease" err="Put \"https://10.0.21.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-7-179ea8c226?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:29:35.040075 kubelet[2901]: E1216 12:29:35.039791 2901 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.21.226:40040->10.0.21.243:2379: read: connection timed out" Dec 16 12:29:35.677677 systemd[1]: cri-containerd-c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10.scope: Deactivated successfully. Dec 16 12:29:35.678170 systemd[1]: cri-containerd-c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10.scope: Consumed 3.928s CPU time, 61.4M memory peak. 
Dec 16 12:29:35.677000 audit: BPF prog-id=256 op=LOAD Dec 16 12:29:35.679912 containerd[1706]: time="2025-12-16T12:29:35.679667781Z" level=info msg="received container exit event container_id:\"c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10\" id:\"c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10\" pid:2765 exit_status:1 exited_at:{seconds:1765888175 nanos:679308859}" Dec 16 12:29:35.680154 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:29:35.680192 kernel: audit: type=1334 audit(1765888175.677:878): prog-id=256 op=LOAD Dec 16 12:29:35.677000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:29:35.681927 kernel: audit: type=1334 audit(1765888175.677:879): prog-id=93 op=UNLOAD Dec 16 12:29:35.681000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:29:35.681000 audit: BPF prog-id=112 op=UNLOAD Dec 16 12:29:35.684391 kernel: audit: type=1334 audit(1765888175.681:880): prog-id=108 op=UNLOAD Dec 16 12:29:35.684420 kernel: audit: type=1334 audit(1765888175.681:881): prog-id=112 op=UNLOAD Dec 16 12:29:35.701868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10-rootfs.mount: Deactivated successfully. 
Dec 16 12:29:36.062058 kubelet[2901]: E1216 12:29:36.061925 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e" Dec 16 12:29:36.240858 systemd[1]: cri-containerd-7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594.scope: Deactivated successfully. Dec 16 12:29:36.241201 systemd[1]: cri-containerd-7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594.scope: Consumed 36.789s CPU time, 104.1M memory peak. 
Dec 16 12:29:36.242571 containerd[1706]: time="2025-12-16T12:29:36.242531195Z" level=info msg="received container exit event container_id:\"7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594\" id:\"7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594\" pid:3250 exit_status:1 exited_at:{seconds:1765888176 nanos:242239514}" Dec 16 12:29:36.248000 audit: BPF prog-id=146 op=UNLOAD Dec 16 12:29:36.248000 audit: BPF prog-id=150 op=UNLOAD Dec 16 12:29:36.251471 kernel: audit: type=1334 audit(1765888176.248:882): prog-id=146 op=UNLOAD Dec 16 12:29:36.251532 kernel: audit: type=1334 audit(1765888176.248:883): prog-id=150 op=UNLOAD Dec 16 12:29:36.263716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594-rootfs.mount: Deactivated successfully. Dec 16 12:29:36.673336 kubelet[2901]: I1216 12:29:36.673125 2901 scope.go:117] "RemoveContainer" containerID="c6c2bb32e9a9ecce238bcef85200c98aac1fa4101c1cc457cc311bd5e881fb10" Dec 16 12:29:36.675755 kubelet[2901]: I1216 12:29:36.675707 2901 scope.go:117] "RemoveContainer" containerID="7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594" Dec 16 12:29:36.676023 containerd[1706]: time="2025-12-16T12:29:36.675908832Z" level=info msg="CreateContainer within sandbox \"8e718408ecbab92160c0ea662d6e04d286bf859bda6fcda3e81c81a3a34c99df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 12:29:36.678231 containerd[1706]: time="2025-12-16T12:29:36.678179879Z" level=info msg="CreateContainer within sandbox \"d7350c866bfa7e896b607f7679907710e90492fadcb80635fe8c0a1f3b34bedf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 12:29:36.689244 containerd[1706]: time="2025-12-16T12:29:36.689183194Z" level=info msg="Container 8bc7fb40d8b10f2bcb574a875b7a86561a3160e7fe16170891b52157445f5389: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:36.699125 containerd[1706]: 
time="2025-12-16T12:29:36.698705105Z" level=info msg="Container 1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:36.699416 containerd[1706]: time="2025-12-16T12:29:36.699389627Z" level=info msg="CreateContainer within sandbox \"8e718408ecbab92160c0ea662d6e04d286bf859bda6fcda3e81c81a3a34c99df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8bc7fb40d8b10f2bcb574a875b7a86561a3160e7fe16170891b52157445f5389\"" Dec 16 12:29:36.700568 containerd[1706]: time="2025-12-16T12:29:36.700540511Z" level=info msg="StartContainer for \"8bc7fb40d8b10f2bcb574a875b7a86561a3160e7fe16170891b52157445f5389\"" Dec 16 12:29:36.702914 containerd[1706]: time="2025-12-16T12:29:36.702271037Z" level=info msg="connecting to shim 8bc7fb40d8b10f2bcb574a875b7a86561a3160e7fe16170891b52157445f5389" address="unix:///run/containerd/s/e4e7fd41c1512e0d5ed56545a15a766381c48cf2dbc057c1469d2eb03d2051de" protocol=ttrpc version=3 Dec 16 12:29:36.711399 containerd[1706]: time="2025-12-16T12:29:36.711346546Z" level=info msg="CreateContainer within sandbox \"d7350c866bfa7e896b607f7679907710e90492fadcb80635fe8c0a1f3b34bedf\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad\"" Dec 16 12:29:36.712189 containerd[1706]: time="2025-12-16T12:29:36.712163869Z" level=info msg="StartContainer for \"1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad\"" Dec 16 12:29:36.713355 containerd[1706]: time="2025-12-16T12:29:36.713328592Z" level=info msg="connecting to shim 1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad" address="unix:///run/containerd/s/05820b25d913420fa3fe8602da2646c990a4ba386391b331ccb72282ffd154d2" protocol=ttrpc version=3 Dec 16 12:29:36.726517 systemd[1]: Started cri-containerd-8bc7fb40d8b10f2bcb574a875b7a86561a3160e7fe16170891b52157445f5389.scope - libcontainer container 
8bc7fb40d8b10f2bcb574a875b7a86561a3160e7fe16170891b52157445f5389. Dec 16 12:29:36.741488 systemd[1]: Started cri-containerd-1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad.scope - libcontainer container 1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad. Dec 16 12:29:36.747000 audit: BPF prog-id=257 op=LOAD Dec 16 12:29:36.750332 kernel: audit: type=1334 audit(1765888176.747:884): prog-id=257 op=LOAD Dec 16 12:29:36.749000 audit: BPF prog-id=258 op=LOAD Dec 16 12:29:36.749000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2609 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.754880 kernel: audit: type=1334 audit(1765888176.749:885): prog-id=258 op=LOAD Dec 16 12:29:36.754959 kernel: audit: type=1300 audit(1765888176.749:885): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2609 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.754986 kernel: audit: type=1327 audit(1765888176.749:885): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633766623430643862313066326263623537346138373562376138 Dec 16 12:29:36.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633766623430643862313066326263623537346138373562376138 Dec 16 12:29:36.750000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:29:36.750000 
audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633766623430643862313066326263623537346138373562376138 Dec 16 12:29:36.750000 audit: BPF prog-id=259 op=LOAD Dec 16 12:29:36.750000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2609 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633766623430643862313066326263623537346138373562376138 Dec 16 12:29:36.750000 audit: BPF prog-id=260 op=LOAD Dec 16 12:29:36.750000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2609 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633766623430643862313066326263623537346138373562376138 Dec 16 12:29:36.756000 audit: BPF 
prog-id=260 op=UNLOAD Dec 16 12:29:36.756000 audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633766623430643862313066326263623537346138373562376138 Dec 16 12:29:36.756000 audit: BPF prog-id=259 op=UNLOAD Dec 16 12:29:36.756000 audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633766623430643862313066326263623537346138373562376138 Dec 16 12:29:36.756000 audit: BPF prog-id=261 op=LOAD Dec 16 12:29:36.756000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2609 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862633766623430643862313066326263623537346138373562376138 
Dec 16 12:29:36.758000 audit: BPF prog-id=262 op=LOAD Dec 16 12:29:36.759000 audit: BPF prog-id=263 op=LOAD Dec 16 12:29:36.759000 audit[5858]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3103 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383834343635303831633237656534383935333836356436353731 Dec 16 12:29:36.759000 audit: BPF prog-id=263 op=UNLOAD Dec 16 12:29:36.759000 audit[5858]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383834343635303831633237656534383935333836356436353731 Dec 16 12:29:36.760000 audit: BPF prog-id=264 op=LOAD Dec 16 12:29:36.760000 audit[5858]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3103 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.760000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383834343635303831633237656534383935333836356436353731 Dec 16 12:29:36.760000 audit: BPF prog-id=265 op=LOAD Dec 16 12:29:36.760000 audit[5858]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3103 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383834343635303831633237656534383935333836356436353731 Dec 16 12:29:36.760000 audit: BPF prog-id=265 op=UNLOAD Dec 16 12:29:36.760000 audit[5858]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383834343635303831633237656534383935333836356436353731 Dec 16 12:29:36.760000 audit: BPF prog-id=264 op=UNLOAD Dec 16 12:29:36.760000 audit[5858]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3103 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:29:36.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383834343635303831633237656534383935333836356436353731 Dec 16 12:29:36.760000 audit: BPF prog-id=266 op=LOAD Dec 16 12:29:36.760000 audit[5858]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3103 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:36.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163383834343635303831633237656534383935333836356436353731 Dec 16 12:29:36.786861 containerd[1706]: time="2025-12-16T12:29:36.786799109Z" level=info msg="StartContainer for \"1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad\" returns successfully" Dec 16 12:29:36.792821 containerd[1706]: time="2025-12-16T12:29:36.792781768Z" level=info msg="StartContainer for \"8bc7fb40d8b10f2bcb574a875b7a86561a3160e7fe16170891b52157445f5389\" returns successfully" Dec 16 12:29:39.057486 kubelet[2901]: E1216 12:29:39.057429 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6" Dec 16 
12:29:39.057486 kubelet[2901]: E1216 12:29:39.057429 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-62gh6" podUID="ea12a60c-4683-4d5e-8e8f-9b466a85a781" Dec 16 12:29:39.620031 systemd[1]: cri-containerd-10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178.scope: Deactivated successfully. Dec 16 12:29:39.620712 systemd[1]: cri-containerd-10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178.scope: Consumed 3.620s CPU time, 22.8M memory peak. Dec 16 12:29:39.621995 containerd[1706]: time="2025-12-16T12:29:39.621731247Z" level=info msg="received container exit event container_id:\"10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178\" id:\"10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178\" pid:2748 exit_status:1 exited_at:{seconds:1765888179 nanos:621172325}" Dec 16 12:29:39.620000 audit: BPF prog-id=267 op=LOAD Dec 16 12:29:39.621000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:29:39.624000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:29:39.624000 audit: BPF prog-id=107 op=UNLOAD Dec 16 12:29:39.644595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178-rootfs.mount: Deactivated successfully. 
Dec 16 12:29:39.690461 kubelet[2901]: I1216 12:29:39.690430 2901 scope.go:117] "RemoveContainer" containerID="10bf267ed1babf989394565d5cab8c19614576344c5b32c3bf7248ab7ea8e178" Dec 16 12:29:39.692134 containerd[1706]: time="2025-12-16T12:29:39.692097553Z" level=info msg="CreateContainer within sandbox \"409e2e7052fed2ca7a2dd5ac6a218b07aecebc97ad8f60b6a562bda69c05095a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 12:29:39.701237 containerd[1706]: time="2025-12-16T12:29:39.701111662Z" level=info msg="Container a0590dbb91482f6a80596887f2636fdf31199a721f233952405ac927a31904e6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:39.709575 containerd[1706]: time="2025-12-16T12:29:39.709537650Z" level=info msg="CreateContainer within sandbox \"409e2e7052fed2ca7a2dd5ac6a218b07aecebc97ad8f60b6a562bda69c05095a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a0590dbb91482f6a80596887f2636fdf31199a721f233952405ac927a31904e6\"" Dec 16 12:29:39.710136 containerd[1706]: time="2025-12-16T12:29:39.710106531Z" level=info msg="StartContainer for \"a0590dbb91482f6a80596887f2636fdf31199a721f233952405ac927a31904e6\"" Dec 16 12:29:39.711200 containerd[1706]: time="2025-12-16T12:29:39.711175015Z" level=info msg="connecting to shim a0590dbb91482f6a80596887f2636fdf31199a721f233952405ac927a31904e6" address="unix:///run/containerd/s/d99be28a8ab9ae79e2fca8638d1f001846516b3af76926f48ca62b4f71a583fb" protocol=ttrpc version=3 Dec 16 12:29:39.733528 systemd[1]: Started cri-containerd-a0590dbb91482f6a80596887f2636fdf31199a721f233952405ac927a31904e6.scope - libcontainer container a0590dbb91482f6a80596887f2636fdf31199a721f233952405ac927a31904e6. 
Dec 16 12:29:39.743000 audit: BPF prog-id=268 op=LOAD Dec 16 12:29:39.743000 audit: BPF prog-id=269 op=LOAD Dec 16 12:29:39.743000 audit[5945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2596 pid=5945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:39.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353930646262393134383266366138303539363838376632363336 Dec 16 12:29:39.743000 audit: BPF prog-id=269 op=UNLOAD Dec 16 12:29:39.743000 audit[5945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=5945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:39.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353930646262393134383266366138303539363838376632363336 Dec 16 12:29:39.743000 audit: BPF prog-id=270 op=LOAD Dec 16 12:29:39.743000 audit[5945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2596 pid=5945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:29:39.743000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353930646262393134383266366138303539363838376632363336
Dec 16 12:29:39.743000 audit: BPF prog-id=271 op=LOAD
Dec 16 12:29:39.743000 audit[5945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2596 pid=5945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:29:39.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353930646262393134383266366138303539363838376632363336
Dec 16 12:29:39.743000 audit: BPF prog-id=271 op=UNLOAD
Dec 16 12:29:39.743000 audit[5945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=5945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:29:39.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353930646262393134383266366138303539363838376632363336
Dec 16 12:29:39.744000 audit: BPF prog-id=270 op=UNLOAD
Dec 16 12:29:39.744000 audit[5945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=5945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:29:39.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353930646262393134383266366138303539363838376632363336
Dec 16 12:29:39.744000 audit: BPF prog-id=272 op=LOAD
Dec 16 12:29:39.744000 audit[5945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2596 pid=5945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:29:39.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353930646262393134383266366138303539363838376632363336
Dec 16 12:29:39.774301 containerd[1706]: time="2025-12-16T12:29:39.774254138Z" level=info msg="StartContainer for \"a0590dbb91482f6a80596887f2636fdf31199a721f233952405ac927a31904e6\" returns successfully"
Dec 16 12:29:40.060755 kubelet[2901]: E1216 12:29:40.060714 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h72ht" podUID="5b2b3263-cc70-4a4f-a835-4543e7a31ab8"
Dec 16 12:29:40.062429 kubelet[2901]: E1216 12:29:40.062391 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-k8fpw" podUID="1da2d440-02fc-4a40-abf1-80ffcd9275c1"
Dec 16 12:29:44.354643 kubelet[2901]: E1216 12:29:44.354561 2901 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T12:29:34Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T12:29:34Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T12:29:34Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T12:29:34Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-7-179ea8c226\": Patch \"https://10.0.21.226:6443/api/v1/nodes/ci-4515-1-0-7-179ea8c226/status?timeout=10s\": context deadline exceeded"
Dec 16 12:29:45.040753 kubelet[2901]: E1216 12:29:45.040693 2901 controller.go:195] "Failed to update lease" err="Put \"https://10.0.21.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-7-179ea8c226?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 12:29:46.057099 kubelet[2901]: E1216 12:29:46.057051 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f6dbc5-vnxvc" podUID="f8c50491-6041-443b-aedf-13a9fee1a718"
Dec 16 12:29:47.853351 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec
Dec 16 12:29:47.979577 systemd[1]: cri-containerd-1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad.scope: Deactivated successfully.
Dec 16 12:29:47.980508 containerd[1706]: time="2025-12-16T12:29:47.980431068Z" level=info msg="received container exit event container_id:\"1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad\" id:\"1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad\" pid:5879 exit_status:1 exited_at:{seconds:1765888187 nanos:979997187}"
Dec 16 12:29:47.985000 audit: BPF prog-id=262 op=UNLOAD
Dec 16 12:29:47.988151 kernel: kauditd_printk_skb: 66 callbacks suppressed
Dec 16 12:29:47.988231 kernel: audit: type=1334 audit(1765888187.985:912): prog-id=262 op=UNLOAD
Dec 16 12:29:47.988255 kernel: audit: type=1334 audit(1765888187.985:913): prog-id=266 op=UNLOAD
Dec 16 12:29:47.985000 audit: BPF prog-id=266 op=UNLOAD
Dec 16 12:29:48.002891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad-rootfs.mount: Deactivated successfully.
Dec 16 12:29:48.716130 kubelet[2901]: I1216 12:29:48.716100 2901 scope.go:117] "RemoveContainer" containerID="7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594"
Dec 16 12:29:48.716708 kubelet[2901]: I1216 12:29:48.716656 2901 scope.go:117] "RemoveContainer" containerID="1c884465081c27ee48953865d6571a2e5b7660e1840a8df361401e0d6ccdd6ad"
Dec 16 12:29:48.716850 kubelet[2901]: E1216 12:29:48.716824 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-crhxs_tigera-operator(42d78fd9-5855-41b6-acb7-40210908f8e0)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-crhxs" podUID="42d78fd9-5855-41b6-acb7-40210908f8e0"
Dec 16 12:29:48.718383 containerd[1706]: time="2025-12-16T12:29:48.718349726Z" level=info msg="RemoveContainer for \"7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594\""
Dec 16 12:29:48.723233 containerd[1706]: time="2025-12-16T12:29:48.723190982Z" level=info msg="RemoveContainer for \"7c4e02926ffe227c7b17f2ed5d0925b16be3007692949ec0b6089f68eaa49594\" returns successfully"
Dec 16 12:29:50.057307 kubelet[2901]: E1216 12:29:50.057213 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dbdbb8c6-9lrp5" podUID="06792be6-fad3-4b79-a250-73afb10c06a6"
Dec 16 12:29:50.058201 kubelet[2901]: E1216 12:29:50.057640 2901 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c89f55df9-rhjdd" podUID="392ff8c8-2fc8-4f21-b0b8-6c2ae06ddf5e"