Dec 16 02:08:52.376749 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 02:08:52.376773 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025
Dec 16 02:08:52.376788 kernel: KASLR enabled
Dec 16 02:08:52.376794 kernel: efi: EFI v2.7 by EDK II
Dec 16 02:08:52.376800 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438351218
Dec 16 02:08:52.376806 kernel: random: crng init done
Dec 16 02:08:52.376813 kernel: secureboot: Secure boot disabled
Dec 16 02:08:52.376819 kernel: ACPI: Early table checksum verification disabled
Dec 16 02:08:52.376825 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Dec 16 02:08:52.376834 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 02:08:52.376842 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376848 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376855 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376874 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376887 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376897 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376904 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376910 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376917 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376923 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 02:08:52.376930 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 02:08:52.376936 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 16 02:08:52.376943 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 02:08:52.376950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Dec 16 02:08:52.376957 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Dec 16 02:08:52.376963 kernel: Zone ranges:
Dec 16 02:08:52.376969 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 16 02:08:52.376976 kernel: DMA32 empty
Dec 16 02:08:52.376982 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Dec 16 02:08:52.376988 kernel: Device empty
Dec 16 02:08:52.376995 kernel: Movable zone start for each node
Dec 16 02:08:52.377001 kernel: Early memory node ranges
Dec 16 02:08:52.377007 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff]
Dec 16 02:08:52.377014 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff]
Dec 16 02:08:52.377020 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff]
Dec 16 02:08:52.377028 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff]
Dec 16 02:08:52.377034 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff]
Dec 16 02:08:52.377041 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Dec 16 02:08:52.377048 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Dec 16 02:08:52.377054 kernel: psci: probing for conduit method from ACPI.
Dec 16 02:08:52.377063 kernel: psci: PSCIv1.3 detected in firmware.
Dec 16 02:08:52.377072 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 02:08:52.377079 kernel: psci: Trusted OS migration not required
Dec 16 02:08:52.377086 kernel: psci: SMC Calling Convention v1.1
Dec 16 02:08:52.377093 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 02:08:52.377100 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 16 02:08:52.377107 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 16 02:08:52.377113 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Dec 16 02:08:52.377120 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Dec 16 02:08:52.377128 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 02:08:52.377135 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 02:08:52.377142 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 02:08:52.377149 kernel: Detected PIPT I-cache on CPU0
Dec 16 02:08:52.377156 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 02:08:52.377162 kernel: CPU features: detected: Spectre-v4
Dec 16 02:08:52.377169 kernel: CPU features: detected: Spectre-BHB
Dec 16 02:08:52.377176 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 02:08:52.377183 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 02:08:52.377190 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 02:08:52.377196 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 02:08:52.377205 kernel: alternatives: applying boot alternatives
Dec 16 02:08:52.377213 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749
Dec 16 02:08:52.377220 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 16 02:08:52.377227 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 02:08:52.377234 kernel: Fallback order for Node 0: 0
Dec 16 02:08:52.377241 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Dec 16 02:08:52.377248 kernel: Policy zone: Normal
Dec 16 02:08:52.377255 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 02:08:52.377261 kernel: software IO TLB: area num 4.
Dec 16 02:08:52.377268 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Dec 16 02:08:52.377276 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 02:08:52.377283 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 02:08:52.377291 kernel: rcu: RCU event tracing is enabled.
Dec 16 02:08:52.377298 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 02:08:52.377305 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 02:08:52.377312 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 02:08:52.377318 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 02:08:52.377325 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 02:08:52.377332 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 02:08:52.377339 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 02:08:52.377346 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 02:08:52.377354 kernel: GICv3: 256 SPIs implemented
Dec 16 02:08:52.377361 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 02:08:52.377368 kernel: Root IRQ handler: gic_handle_irq
Dec 16 02:08:52.377374 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 02:08:52.377381 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 02:08:52.377388 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 02:08:52.377395 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 02:08:52.377402 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 02:08:52.377409 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 02:08:52.377416 kernel: GICv3: using LPI property table @0x0000000100130000
Dec 16 02:08:52.377422 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Dec 16 02:08:52.377429 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 02:08:52.377437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 02:08:52.377444 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 02:08:52.377451 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 02:08:52.377458 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 02:08:52.377465 kernel: arm-pv: using stolen time PV
Dec 16 02:08:52.377472 kernel: Console: colour dummy device 80x25
Dec 16 02:08:52.377479 kernel: ACPI: Core revision 20240827
Dec 16 02:08:52.377487 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 02:08:52.377495 kernel: pid_max: default: 32768 minimum: 301
Dec 16 02:08:52.377503 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 02:08:52.377510 kernel: landlock: Up and running.
Dec 16 02:08:52.377517 kernel: SELinux: Initializing.
Dec 16 02:08:52.377524 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 02:08:52.377532 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 02:08:52.377539 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 02:08:52.377546 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 02:08:52.377555 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 02:08:52.377562 kernel: Remapping and enabling EFI services.
Dec 16 02:08:52.377569 kernel: smp: Bringing up secondary CPUs ...
Dec 16 02:08:52.377577 kernel: Detected PIPT I-cache on CPU1
Dec 16 02:08:52.377584 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 02:08:52.377591 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Dec 16 02:08:52.377598 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 02:08:52.377607 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 02:08:52.377614 kernel: Detected PIPT I-cache on CPU2
Dec 16 02:08:52.377626 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 02:08:52.377635 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Dec 16 02:08:52.377643 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 02:08:52.377650 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 02:08:52.377657 kernel: Detected PIPT I-cache on CPU3
Dec 16 02:08:52.377665 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 02:08:52.377674 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Dec 16 02:08:52.377681 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 02:08:52.377689 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 02:08:52.377697 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 02:08:52.377704 kernel: SMP: Total of 4 processors activated.
Dec 16 02:08:52.377712 kernel: CPU: All CPU(s) started at EL1
Dec 16 02:08:52.377721 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 02:08:52.377728 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 02:08:52.377749 kernel: CPU features: detected: Common not Private translations
Dec 16 02:08:52.377756 kernel: CPU features: detected: CRC32 instructions
Dec 16 02:08:52.377764 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 02:08:52.377771 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 02:08:52.377779 kernel: CPU features: detected: LSE atomic instructions
Dec 16 02:08:52.377789 kernel: CPU features: detected: Privileged Access Never
Dec 16 02:08:52.377796 kernel: CPU features: detected: RAS Extension Support
Dec 16 02:08:52.377804 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 02:08:52.377811 kernel: alternatives: applying system-wide alternatives
Dec 16 02:08:52.377819 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 02:08:52.377827 kernel: Memory: 16324432K/16777216K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 430000K reserved, 16384K cma-reserved)
Dec 16 02:08:52.377835 kernel: devtmpfs: initialized
Dec 16 02:08:52.377843 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 02:08:52.377852 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 02:08:52.377873 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 02:08:52.377881 kernel: 0 pages in range for non-PLT usage
Dec 16 02:08:52.377889 kernel: 515168 pages in range for PLT usage
Dec 16 02:08:52.377896 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 02:08:52.377903 kernel: SMBIOS 3.0.0 present.
Dec 16 02:08:52.377911 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 16 02:08:52.377921 kernel: DMI: Memory slots populated: 1/1
Dec 16 02:08:52.377928 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 02:08:52.377936 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 16 02:08:52.377943 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 02:08:52.377951 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 02:08:52.377958 kernel: audit: initializing netlink subsys (disabled)
Dec 16 02:08:52.377966 kernel: audit: type=2000 audit(0.039:1): state=initialized audit_enabled=0 res=1
Dec 16 02:08:52.377975 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 02:08:52.377982 kernel: cpuidle: using governor menu
Dec 16 02:08:52.377990 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 02:08:52.377997 kernel: ASID allocator initialised with 32768 entries
Dec 16 02:08:52.378005 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 02:08:52.378012 kernel: Serial: AMBA PL011 UART driver
Dec 16 02:08:52.378020 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 02:08:52.378029 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 02:08:52.378036 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 02:08:52.378044 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 02:08:52.378051 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 02:08:52.378059 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 02:08:52.378066 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 02:08:52.378073 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 02:08:52.378081 kernel: ACPI: Added _OSI(Module Device)
Dec 16 02:08:52.378090 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 02:08:52.378097 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 02:08:52.378105 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 02:08:52.378112 kernel: ACPI: Interpreter enabled
Dec 16 02:08:52.378120 kernel: ACPI: Using GIC for interrupt routing
Dec 16 02:08:52.378127 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 02:08:52.378134 kernel: ACPI: CPU0 has been hot-added
Dec 16 02:08:52.378143 kernel: ACPI: CPU1 has been hot-added
Dec 16 02:08:52.378151 kernel: ACPI: CPU2 has been hot-added
Dec 16 02:08:52.378158 kernel: ACPI: CPU3 has been hot-added
Dec 16 02:08:52.378166 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 02:08:52.378173 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 02:08:52.378181 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 02:08:52.378351 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 02:08:52.378444 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 02:08:52.378538 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 02:08:52.378617 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 02:08:52.378698 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 02:08:52.378708 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 02:08:52.378715 kernel: PCI host bridge to bus 0000:00
Dec 16 02:08:52.378804 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 02:08:52.378893 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 02:08:52.378969 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 02:08:52.379041 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 02:08:52.379138 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 02:08:52.379231 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.379317 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff]
Dec 16 02:08:52.379396 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 16 02:08:52.379475 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff]
Dec 16 02:08:52.379554 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Dec 16 02:08:52.379641 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.379723 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff]
Dec 16 02:08:52.379802 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Dec 16 02:08:52.379896 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff]
Dec 16 02:08:52.379990 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.380072 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff]
Dec 16 02:08:52.380152 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Dec 16 02:08:52.380234 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff]
Dec 16 02:08:52.380313 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Dec 16 02:08:52.380400 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.380481 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff]
Dec 16 02:08:52.380559 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Dec 16 02:08:52.380640 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Dec 16 02:08:52.380727 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.380806 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff]
Dec 16 02:08:52.380896 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Dec 16 02:08:52.380978 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff]
Dec 16 02:08:52.381059 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Dec 16 02:08:52.381150 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.381230 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff]
Dec 16 02:08:52.381309 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Dec 16 02:08:52.381388 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff]
Dec 16 02:08:52.381467 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Dec 16 02:08:52.381554 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.381635 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff]
Dec 16 02:08:52.381713 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Dec 16 02:08:52.381846 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.381947 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff]
Dec 16 02:08:52.382028 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Dec 16 02:08:52.382116 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.382201 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff]
Dec 16 02:08:52.382281 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Dec 16 02:08:52.382368 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.382448 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff]
Dec 16 02:08:52.382529 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Dec 16 02:08:52.382616 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.382696 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff]
Dec 16 02:08:52.382775 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Dec 16 02:08:52.382870 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.382953 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff]
Dec 16 02:08:52.383046 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Dec 16 02:08:52.383139 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.383240 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff]
Dec 16 02:08:52.383353 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Dec 16 02:08:52.383445 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.383525 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff]
Dec 16 02:08:52.383608 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Dec 16 02:08:52.383697 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.383777 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff]
Dec 16 02:08:52.383870 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Dec 16 02:08:52.383970 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.384058 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff]
Dec 16 02:08:52.384138 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Dec 16 02:08:52.384222 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.384302 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff]
Dec 16 02:08:52.384389 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Dec 16 02:08:52.384477 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.384560 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff]
Dec 16 02:08:52.384641 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Dec 16 02:08:52.384720 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff]
Dec 16 02:08:52.384798 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff]
Dec 16 02:08:52.384898 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.384981 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff]
Dec 16 02:08:52.385063 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Dec 16 02:08:52.385142 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff]
Dec 16 02:08:52.385221 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff]
Dec 16 02:08:52.385312 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.385391 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff]
Dec 16 02:08:52.385469 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Dec 16 02:08:52.385549 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]
Dec 16 02:08:52.385627 kernel: pci 0000:00:03.3: bridge window [mem 0x11a00000-0x11bfffff]
Dec 16 02:08:52.385713 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.385811 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff]
Dec 16 02:08:52.385910 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Dec 16 02:08:52.385995 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff]
Dec 16 02:08:52.386079 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff]
Dec 16 02:08:52.386169 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.386249 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff]
Dec 16 02:08:52.386328 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Dec 16 02:08:52.386407 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff]
Dec 16 02:08:52.386485 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff]
Dec 16 02:08:52.386585 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.386666 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff]
Dec 16 02:08:52.386745 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Dec 16 02:08:52.386825 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff]
Dec 16 02:08:52.386924 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff]
Dec 16 02:08:52.387014 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.387097 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff]
Dec 16 02:08:52.387176 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Dec 16 02:08:52.387256 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff]
Dec 16 02:08:52.387335 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff]
Dec 16 02:08:52.387420 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.387500 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff]
Dec 16 02:08:52.387580 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Dec 16 02:08:52.387659 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff]
Dec 16 02:08:52.387737 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff]
Dec 16 02:08:52.387823 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.387916 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff]
Dec 16 02:08:52.388000 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Dec 16 02:08:52.388082 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff]
Dec 16 02:08:52.388161 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff]
Dec 16 02:08:52.388246 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.388326 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff]
Dec 16 02:08:52.388403 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Dec 16 02:08:52.388481 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff]
Dec 16 02:08:52.388561 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff]
Dec 16 02:08:52.388645 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.388724 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff]
Dec 16 02:08:52.388802 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Dec 16 02:08:52.388892 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff]
Dec 16 02:08:52.388986 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff]
Dec 16 02:08:52.389088 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.389169 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff]
Dec 16 02:08:52.389248 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Dec 16 02:08:52.389328 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff]
Dec 16 02:08:52.389407 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff]
Dec 16 02:08:52.389494 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.389586 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff]
Dec 16 02:08:52.389669 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Dec 16 02:08:52.389762 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff]
Dec 16 02:08:52.389850 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff]
Dec 16 02:08:52.389963 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.390045 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff]
Dec 16 02:08:52.390124 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Dec 16 02:08:52.390202 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff]
Dec 16 02:08:52.390280 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff]
Dec 16 02:08:52.390390 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.390474 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff]
Dec 16 02:08:52.390553 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Dec 16 02:08:52.390635 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff]
Dec 16 02:08:52.390713 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff]
Dec 16 02:08:52.390798 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 02:08:52.390892 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff]
Dec 16 02:08:52.390974 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Dec 16 02:08:52.391053 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff]
Dec 16 02:08:52.391131 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff]
Dec 16 02:08:52.391220 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 02:08:52.391315 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff]
Dec 16 02:08:52.391404 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 02:08:52.391485 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Dec 16 02:08:52.391575 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 16 02:08:52.391663 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit]
Dec 16 02:08:52.391752 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Dec 16 02:08:52.391835 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff]
Dec 16 02:08:52.391941 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 16 02:08:52.392032 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 02:08:52.392120 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 16 02:08:52.392217 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 02:08:52.392308 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff]
Dec 16 02:08:52.392393 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 16 02:08:52.392482 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint
Dec 16 02:08:52.392563 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff]
Dec 16 02:08:52.392643 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 16 02:08:52.392725 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 16 02:08:52.392810 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 16 02:08:52.392901 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 16 02:08:52.392986 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 16 02:08:52.393065 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 16 02:08:52.393146 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 16 02:08:52.393228 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 16 02:08:52.393320 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 16 02:08:52.393401 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 16 02:08:52.393485 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 16 02:08:52.393572 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 16 02:08:52.393656 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 16 02:08:52.393752 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 16 02:08:52.393841 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 16 02:08:52.393947 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Dec 16 02:08:52.394034 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 16 02:08:52.394119 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 16 02:08:52.394202 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 16 02:08:52.394286 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 02:08:52.394369 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000
Dec 16 02:08:52.394462 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000
Dec 16 02:08:52.394557 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 02:08:52.394643 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 16 02:08:52.394722 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 16 02:08:52.394805 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 02:08:52.394896 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 16 02:08:52.394978 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 16 02:08:52.395064 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 16 02:08:52.395144 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000
Dec 16 02:08:52.395222 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000
Dec 16 02:08:52.395307 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000
Dec 16 02:08:52.395386 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Dec 16 02:08:52.395471 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000
Dec 16 02:08:52.395557 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000
Dec 16 02:08:52.395636 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000
Dec 16 02:08:52.395716 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000
Dec 16 02:08:52.395799 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000
Dec 16 02:08:52.395909 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000
Dec 16 02:08:52.396001 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000
Dec 16 
02:08:52.396090 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 16 02:08:52.396173 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Dec 16 02:08:52.396252 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Dec 16 02:08:52.396336 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 16 02:08:52.396417 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Dec 16 02:08:52.396504 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Dec 16 02:08:52.396589 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 16 02:08:52.396670 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Dec 16 02:08:52.396749 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Dec 16 02:08:52.396831 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 16 02:08:52.396928 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Dec 16 02:08:52.397013 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Dec 16 02:08:52.397097 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 16 02:08:52.397178 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Dec 16 02:08:52.397257 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 Dec 16 02:08:52.397339 kernel: pci 0000:00:03.2: 
bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Dec 16 02:08:52.397422 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Dec 16 02:08:52.397501 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Dec 16 02:08:52.397586 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Dec 16 02:08:52.397668 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Dec 16 02:08:52.397771 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Dec 16 02:08:52.397868 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Dec 16 02:08:52.397960 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Dec 16 02:08:52.398044 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Dec 16 02:08:52.398129 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 16 02:08:52.398211 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Dec 16 02:08:52.398290 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Dec 16 02:08:52.398375 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 16 02:08:52.398458 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Dec 16 02:08:52.398539 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Dec 16 02:08:52.398622 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] 
add_size 1000 Dec 16 02:08:52.398705 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Dec 16 02:08:52.398785 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Dec 16 02:08:52.398880 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 16 02:08:52.398969 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Dec 16 02:08:52.399052 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Dec 16 02:08:52.399138 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 16 02:08:52.399219 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Dec 16 02:08:52.399317 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Dec 16 02:08:52.399405 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Dec 16 02:08:52.399486 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Dec 16 02:08:52.399563 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Dec 16 02:08:52.399645 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Dec 16 02:08:52.399726 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Dec 16 02:08:52.399804 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Dec 16 02:08:52.399912 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 16 02:08:52.399997 kernel: 
pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Dec 16 02:08:52.400085 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Dec 16 02:08:52.400180 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 16 02:08:52.400268 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Dec 16 02:08:52.400482 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Dec 16 02:08:52.400568 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 16 02:08:52.400655 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Dec 16 02:08:52.400736 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Dec 16 02:08:52.400837 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 16 02:08:52.400938 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Dec 16 02:08:52.401025 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Dec 16 02:08:52.401109 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 16 02:08:52.401190 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Dec 16 02:08:52.401270 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Dec 16 02:08:52.401353 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Dec 16 02:08:52.401434 kernel: pci 0000:00:01.0: bridge window [mem 
0x8000000000-0x80001fffff 64bit pref]: assigned Dec 16 02:08:52.401520 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Dec 16 02:08:52.401603 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Dec 16 02:08:52.401695 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Dec 16 02:08:52.401810 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Dec 16 02:08:52.401908 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Dec 16 02:08:52.401995 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Dec 16 02:08:52.402082 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Dec 16 02:08:52.402161 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Dec 16 02:08:52.402241 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Dec 16 02:08:52.402320 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Dec 16 02:08:52.402402 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Dec 16 02:08:52.402485 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Dec 16 02:08:52.402573 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Dec 16 02:08:52.402660 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Dec 16 02:08:52.402745 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Dec 16 02:08:52.402829 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Dec 16 02:08:52.402939 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Dec 16 02:08:52.403024 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: 
assigned Dec 16 02:08:52.403106 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Dec 16 02:08:52.403189 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Dec 16 02:08:52.403289 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Dec 16 02:08:52.403369 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Dec 16 02:08:52.403451 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Dec 16 02:08:52.403531 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Dec 16 02:08:52.403612 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Dec 16 02:08:52.403693 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Dec 16 02:08:52.403775 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Dec 16 02:08:52.403857 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Dec 16 02:08:52.403958 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Dec 16 02:08:52.404038 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Dec 16 02:08:52.404119 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Dec 16 02:08:52.404202 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Dec 16 02:08:52.404284 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Dec 16 02:08:52.404363 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Dec 16 02:08:52.404454 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Dec 16 02:08:52.404537 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Dec 16 02:08:52.404630 kernel: pci 
0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Dec 16 02:08:52.404711 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Dec 16 02:08:52.404794 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Dec 16 02:08:52.404890 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Dec 16 02:08:52.404977 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Dec 16 02:08:52.405056 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Dec 16 02:08:52.405137 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Dec 16 02:08:52.405215 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Dec 16 02:08:52.405299 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Dec 16 02:08:52.405391 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Dec 16 02:08:52.405481 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Dec 16 02:08:52.405568 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Dec 16 02:08:52.405651 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Dec 16 02:08:52.405741 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Dec 16 02:08:52.405833 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Dec 16 02:08:52.405927 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Dec 16 02:08:52.406010 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Dec 16 02:08:52.406090 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Dec 16 02:08:52.406171 kernel: pci 0000:00:04.4: bridge window [mem 
0x13800000-0x139fffff]: assigned Dec 16 02:08:52.406251 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Dec 16 02:08:52.406334 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Dec 16 02:08:52.406413 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Dec 16 02:08:52.406494 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Dec 16 02:08:52.406574 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Dec 16 02:08:52.406654 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Dec 16 02:08:52.406740 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Dec 16 02:08:52.406821 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Dec 16 02:08:52.406915 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Dec 16 02:08:52.406998 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Dec 16 02:08:52.407076 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Dec 16 02:08:52.407156 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Dec 16 02:08:52.407236 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Dec 16 02:08:52.407315 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Dec 16 02:08:52.407404 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Dec 16 02:08:52.407485 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Dec 16 02:08:52.407564 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Dec 16 02:08:52.407644 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Dec 16 02:08:52.407723 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 02:08:52.407802 kernel: pci 0000:00:01.5: BAR 0 
[mem 0x14205000-0x14205fff]: assigned Dec 16 02:08:52.407905 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Dec 16 02:08:52.407993 kernel: pci 0000:00:01.6: BAR 0 [mem 0x14206000-0x14206fff]: assigned Dec 16 02:08:52.408073 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Dec 16 02:08:52.408159 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Dec 16 02:08:52.408242 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Dec 16 02:08:52.408322 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Dec 16 02:08:52.408401 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Dec 16 02:08:52.408484 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Dec 16 02:08:52.408563 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Dec 16 02:08:52.408642 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Dec 16 02:08:52.408721 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Dec 16 02:08:52.408800 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Dec 16 02:08:52.408910 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Dec 16 02:08:52.409001 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Dec 16 02:08:52.409095 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Dec 16 02:08:52.409179 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Dec 16 02:08:52.409267 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Dec 16 02:08:52.409356 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Dec 16 02:08:52.409450 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Dec 16 02:08:52.409535 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Dec 16 02:08:52.409615 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 
02:08:52.409694 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.409789 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Dec 16 02:08:52.409884 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.409971 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.410052 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Dec 16 02:08:52.410131 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.410210 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.410290 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Dec 16 02:08:52.410377 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.410459 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.410543 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Dec 16 02:08:52.410622 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.410709 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.410793 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Dec 16 02:08:52.410887 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.410969 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.411053 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Dec 16 02:08:52.411131 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.411210 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.411291 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Dec 16 02:08:52.411370 kernel: pci 
0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.411448 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.411531 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Dec 16 02:08:52.411611 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.411690 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.411778 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Dec 16 02:08:52.411868 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.411950 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.412034 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Dec 16 02:08:52.412119 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.412198 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.412279 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Dec 16 02:08:52.412360 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.412444 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.412533 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Dec 16 02:08:52.412617 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.412698 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.412778 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Dec 16 02:08:52.412856 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.412946 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.413029 kernel: pci 0000:00:04.5: BAR 0 [mem 
0x1421d000-0x1421dfff]: assigned Dec 16 02:08:52.413110 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.413193 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.413275 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Dec 16 02:08:52.413356 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.413436 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.413526 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Dec 16 02:08:52.413607 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.413690 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.413793 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Dec 16 02:08:52.413893 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.413982 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.414084 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Dec 16 02:08:52.414168 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Dec 16 02:08:52.414255 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Dec 16 02:08:52.414335 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Dec 16 02:08:52.414415 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 02:08:52.414496 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Dec 16 02:08:52.414579 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Dec 16 02:08:52.414660 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Dec 16 02:08:52.414742 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Dec 16 02:08:52.414826 kernel: pci 0000:00:03.7: 
bridge window [io 0xa000-0xafff]: assigned Dec 16 02:08:52.414927 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Dec 16 02:08:52.415011 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Dec 16 02:08:52.415094 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Dec 16 02:08:52.415179 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Dec 16 02:08:52.415263 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Dec 16 02:08:52.415346 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.415430 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.415516 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.415598 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.415700 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.415781 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.415874 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.415956 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.416040 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.416122 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.416203 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.416283 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.416364 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.416445 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.416536 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: 
can't assign; no space Dec 16 02:08:52.416622 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.416704 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.416786 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.416877 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.416960 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.417044 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.417128 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.417211 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.417292 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.417378 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.417459 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.417560 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.417642 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.417724 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.417823 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.417927 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.418010 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.418097 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.418177 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.418260 kernel: pci 0000:00:01.0: bridge 
window [io size 0x1000]: can't assign; no space Dec 16 02:08:52.418341 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Dec 16 02:08:52.418441 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Dec 16 02:08:52.418525 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 16 02:08:52.418611 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Dec 16 02:08:52.418702 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 16 02:08:52.418784 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Dec 16 02:08:52.418876 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Dec 16 02:08:52.418966 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Dec 16 02:08:52.419048 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Dec 16 02:08:52.419131 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Dec 16 02:08:52.419212 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Dec 16 02:08:52.419298 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Dec 16 02:08:52.419384 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Dec 16 02:08:52.419464 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Dec 16 02:08:52.419544 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Dec 16 02:08:52.419627 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Dec 16 02:08:52.419714 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Dec 16 02:08:52.419794 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Dec 16 02:08:52.419882 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Dec 16 02:08:52.419963 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Dec 16 02:08:52.420049 kernel: pci 0000:05:00.0: BAR 4 [mem 
0x8000800000-0x8000803fff 64bit pref]: assigned Dec 16 02:08:52.420134 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Dec 16 02:08:52.420214 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Dec 16 02:08:52.420293 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Dec 16 02:08:52.420373 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Dec 16 02:08:52.420459 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Dec 16 02:08:52.420541 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Dec 16 02:08:52.420628 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Dec 16 02:08:52.420709 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Dec 16 02:08:52.420791 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 16 02:08:52.420922 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Dec 16 02:08:52.421015 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Dec 16 02:08:52.421108 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 16 02:08:52.421198 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Dec 16 02:08:52.421279 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Dec 16 02:08:52.421362 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 16 02:08:52.421445 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Dec 16 02:08:52.421525 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Dec 16 02:08:52.421606 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Dec 16 02:08:52.421692 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Dec 16 02:08:52.421798 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Dec 16 02:08:52.421897 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Dec 16 02:08:52.421983 kernel: pci 
0000:00:02.2: PCI bridge to [bus 0b] Dec 16 02:08:52.422063 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Dec 16 02:08:52.422149 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Dec 16 02:08:52.422231 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Dec 16 02:08:52.422312 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Dec 16 02:08:52.422392 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Dec 16 02:08:52.422474 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Dec 16 02:08:52.422555 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Dec 16 02:08:52.422639 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Dec 16 02:08:52.422720 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Dec 16 02:08:52.422803 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Dec 16 02:08:52.422895 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Dec 16 02:08:52.422976 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Dec 16 02:08:52.423058 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Dec 16 02:08:52.423139 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Dec 16 02:08:52.423219 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Dec 16 02:08:52.423306 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Dec 16 02:08:52.423392 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Dec 16 02:08:52.423477 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Dec 16 02:08:52.423557 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Dec 16 02:08:52.423639 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Dec 16 02:08:52.423721 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Dec 16 02:08:52.423803 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Dec 16 02:08:52.423898 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Dec 16 02:08:52.423987 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Dec 16 02:08:52.424070 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Dec 16 02:08:52.424150 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Dec 16 02:08:52.424232 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Dec 16 02:08:52.424318 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Dec 16 02:08:52.424399 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Dec 16 02:08:52.424481 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Dec 16 02:08:52.424566 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Dec 16 02:08:52.424652 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Dec 16 02:08:52.424736 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Dec 16 02:08:52.424820 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Dec 16 02:08:52.424932 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Dec 16 02:08:52.425019 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Dec 16 02:08:52.425109 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Dec 16 02:08:52.425191 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Dec 16 02:08:52.425272 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Dec 16 02:08:52.425355 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Dec 16 02:08:52.425435 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Dec 16 02:08:52.425516 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Dec 16 02:08:52.425595 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Dec 16 02:08:52.425679 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Dec 16 02:08:52.425779 kernel: pci 
0000:00:03.7: bridge window [io 0xa000-0xafff] Dec 16 02:08:52.425887 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Dec 16 02:08:52.425975 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Dec 16 02:08:52.426061 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Dec 16 02:08:52.426141 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Dec 16 02:08:52.426221 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Dec 16 02:08:52.426306 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Dec 16 02:08:52.426388 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Dec 16 02:08:52.426469 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Dec 16 02:08:52.426550 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Dec 16 02:08:52.426629 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Dec 16 02:08:52.426712 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Dec 16 02:08:52.426796 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Dec 16 02:08:52.426893 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Dec 16 02:08:52.426983 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Dec 16 02:08:52.427089 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Dec 16 02:08:52.427173 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Dec 16 02:08:52.427257 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Dec 16 02:08:52.427338 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Dec 16 02:08:52.427425 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Dec 16 02:08:52.427509 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Dec 16 02:08:52.427594 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Dec 16 02:08:52.427677 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] 
Dec 16 02:08:52.427763 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Dec 16 02:08:52.427853 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Dec 16 02:08:52.427950 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Dec 16 02:08:52.428039 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Dec 16 02:08:52.428125 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Dec 16 02:08:52.428209 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Dec 16 02:08:52.428288 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Dec 16 02:08:52.428369 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Dec 16 02:08:52.428454 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Dec 16 02:08:52.428540 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Dec 16 02:08:52.428620 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Dec 16 02:08:52.428702 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Dec 16 02:08:52.428785 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Dec 16 02:08:52.428892 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Dec 16 02:08:52.428999 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Dec 16 02:08:52.429080 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Dec 16 02:08:52.429165 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 16 02:08:52.429256 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 02:08:52.429334 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 16 02:08:52.429420 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Dec 16 02:08:52.429497 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Dec 16 02:08:52.429580 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Dec 16 02:08:52.429658 kernel: pci_bus 
0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Dec 16 02:08:52.429759 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Dec 16 02:08:52.429838 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Dec 16 02:08:52.429936 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Dec 16 02:08:52.430013 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Dec 16 02:08:52.430099 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Dec 16 02:08:52.430175 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Dec 16 02:08:52.430256 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Dec 16 02:08:52.430332 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 16 02:08:52.430420 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Dec 16 02:08:52.430497 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 16 02:08:52.430579 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Dec 16 02:08:52.430653 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 16 02:08:52.430734 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Dec 16 02:08:52.430808 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Dec 16 02:08:52.430909 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Dec 16 02:08:52.430985 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Dec 16 02:08:52.431066 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Dec 16 02:08:52.431140 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Dec 16 02:08:52.431220 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Dec 16 02:08:52.431296 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Dec 16 02:08:52.431377 kernel: pci_bus 
0000:0d: resource 1 [mem 0x11800000-0x119fffff] Dec 16 02:08:52.431452 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Dec 16 02:08:52.431534 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Dec 16 02:08:52.431611 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Dec 16 02:08:52.431698 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Dec 16 02:08:52.431774 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Dec 16 02:08:52.431867 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Dec 16 02:08:52.431946 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Dec 16 02:08:52.432027 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Dec 16 02:08:52.432101 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Dec 16 02:08:52.432184 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Dec 16 02:08:52.432258 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Dec 16 02:08:52.432339 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Dec 16 02:08:52.432414 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Dec 16 02:08:52.432490 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Dec 16 02:08:52.432569 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Dec 16 02:08:52.432645 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Dec 16 02:08:52.432719 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Dec 16 02:08:52.432800 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Dec 16 02:08:52.432887 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Dec 16 02:08:52.432968 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Dec 16 02:08:52.433051 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Dec 16 02:08:52.433125 
kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Dec 16 02:08:52.433199 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Dec 16 02:08:52.433280 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Dec 16 02:08:52.433357 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Dec 16 02:08:52.433431 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Dec 16 02:08:52.433512 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Dec 16 02:08:52.433586 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Dec 16 02:08:52.433660 kernel: pci_bus 0000:18: resource 2 [mem 0x8002e00000-0x8002ffffff 64bit pref] Dec 16 02:08:52.433761 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Dec 16 02:08:52.433848 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Dec 16 02:08:52.433939 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Dec 16 02:08:52.434024 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Dec 16 02:08:52.434099 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Dec 16 02:08:52.434174 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Dec 16 02:08:52.434254 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 16 02:08:52.434333 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Dec 16 02:08:52.434407 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Dec 16 02:08:52.434488 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Dec 16 02:08:52.434563 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Dec 16 02:08:52.434643 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Dec 16 02:08:52.434730 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Dec 16 02:08:52.434806 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Dec 16 02:08:52.434900 kernel: pci_bus 0000:1d: resource 2 [mem 
0x8003800000-0x80039fffff 64bit pref] Dec 16 02:08:52.434986 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Dec 16 02:08:52.435061 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Dec 16 02:08:52.435135 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Dec 16 02:08:52.435218 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Dec 16 02:08:52.435293 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Dec 16 02:08:52.435373 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Dec 16 02:08:52.435458 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Dec 16 02:08:52.435534 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Dec 16 02:08:52.435607 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Dec 16 02:08:52.435695 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Dec 16 02:08:52.435770 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Dec 16 02:08:52.435844 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Dec 16 02:08:52.435854 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 02:08:52.435880 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 02:08:52.435889 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 02:08:52.435900 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 02:08:52.435908 kernel: iommu: Default domain type: Translated Dec 16 02:08:52.435916 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 02:08:52.435924 kernel: efivars: Registered efivars operations Dec 16 02:08:52.435932 kernel: vgaarb: loaded Dec 16 02:08:52.435940 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 02:08:52.435948 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 02:08:52.435958 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 02:08:52.435966 kernel: pnp: PnP ACPI 
init Dec 16 02:08:52.436067 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 16 02:08:52.436080 kernel: pnp: PnP ACPI: found 1 devices Dec 16 02:08:52.436088 kernel: NET: Registered PF_INET protocol family Dec 16 02:08:52.436096 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 02:08:52.436105 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Dec 16 02:08:52.436115 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 02:08:52.436123 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 02:08:52.436131 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 02:08:52.436140 kernel: TCP: Hash tables configured (established 131072 bind 65536) Dec 16 02:08:52.436148 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 16 02:08:52.436156 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 16 02:08:52.436164 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 02:08:52.436255 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Dec 16 02:08:52.436267 kernel: PCI: CLS 0 bytes, default 64 Dec 16 02:08:52.436275 kernel: kvm [1]: HYP mode not available Dec 16 02:08:52.436283 kernel: Initialise system trusted keyrings Dec 16 02:08:52.436291 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Dec 16 02:08:52.436299 kernel: Key type asymmetric registered Dec 16 02:08:52.436307 kernel: Asymmetric key parser 'x509' registered Dec 16 02:08:52.436317 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 02:08:52.436325 kernel: io scheduler mq-deadline registered Dec 16 02:08:52.436333 kernel: io scheduler kyber registered Dec 16 02:08:52.436341 kernel: io scheduler bfq registered Dec 16 02:08:52.436350 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 
02:08:52.436432 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Dec 16 02:08:52.436515 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Dec 16 02:08:52.436597 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.436679 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Dec 16 02:08:52.436760 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Dec 16 02:08:52.436841 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.436945 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Dec 16 02:08:52.437034 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Dec 16 02:08:52.437117 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.437199 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Dec 16 02:08:52.437279 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Dec 16 02:08:52.437358 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.437439 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Dec 16 02:08:52.437520 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Dec 16 02:08:52.437602 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.437683 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Dec 16 02:08:52.437781 kernel: pcieport 0000:00:01.5: AER: enabled with IRQ 55 Dec 16 02:08:52.437878 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.437970 
kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Dec 16 02:08:52.438052 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Dec 16 02:08:52.438131 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.438218 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Dec 16 02:08:52.438298 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Dec 16 02:08:52.438378 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.438389 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 16 02:08:52.438468 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Dec 16 02:08:52.438550 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Dec 16 02:08:52.438632 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.438713 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Dec 16 02:08:52.438794 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Dec 16 02:08:52.438893 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.438981 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Dec 16 02:08:52.439065 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Dec 16 02:08:52.439148 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.439235 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Dec 16 02:08:52.439322 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Dec 16 02:08:52.439403 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ 
NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.439486 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Dec 16 02:08:52.439566 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Dec 16 02:08:52.439646 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.439731 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Dec 16 02:08:52.439828 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Dec 16 02:08:52.439921 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.440005 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Dec 16 02:08:52.440086 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Dec 16 02:08:52.440167 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.440252 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Dec 16 02:08:52.440334 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Dec 16 02:08:52.440413 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.440424 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 16 02:08:52.440507 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Dec 16 02:08:52.440597 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Dec 16 02:08:52.440678 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.440780 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Dec 16 02:08:52.440882 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Dec 16 02:08:52.440970 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.441053 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Dec 16 02:08:52.441151 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Dec 16 02:08:52.441236 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.441324 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Dec 16 02:08:52.441407 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Dec 16 02:08:52.441498 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.441586 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Dec 16 02:08:52.441670 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Dec 16 02:08:52.441771 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.441880 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Dec 16 02:08:52.441967 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Dec 16 02:08:52.442047 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.442129 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Dec 16 02:08:52.442209 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Dec 16 02:08:52.442288 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.442374 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Dec 16 02:08:52.442454 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Dec 16 02:08:52.442532 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.442543 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 16 02:08:52.442623 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Dec 16 02:08:52.442710 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Dec 16 02:08:52.442792 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.442900 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Dec 16 02:08:52.442993 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Dec 16 02:08:52.443075 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.443158 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Dec 16 02:08:52.443241 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Dec 16 02:08:52.443322 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.443430 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Dec 16 02:08:52.443522 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Dec 16 02:08:52.443604 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.443688 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Dec 16 02:08:52.443773 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Dec 16 02:08:52.443855 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.443962 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Dec 16 02:08:52.444046 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Dec 16 02:08:52.444127 kernel: pcieport 
0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.444210 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Dec 16 02:08:52.444294 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Dec 16 02:08:52.444376 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.444460 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Dec 16 02:08:52.444544 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Dec 16 02:08:52.444634 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.444720 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Dec 16 02:08:52.444810 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Dec 16 02:08:52.444899 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 02:08:52.444911 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 02:08:52.444922 kernel: ACPI: button: Power Button [PWRB] Dec 16 02:08:52.445014 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Dec 16 02:08:52.445104 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 16 02:08:52.445115 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 02:08:52.445123 kernel: thunder_xcv, ver 1.0 Dec 16 02:08:52.445132 kernel: thunder_bgx, ver 1.0 Dec 16 02:08:52.445140 kernel: nicpf, ver 1.0 Dec 16 02:08:52.445151 kernel: nicvf, ver 1.0 Dec 16 02:08:52.445248 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 02:08:52.445329 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T02:08:51 UTC (1765850931) Dec 16 02:08:52.445340 kernel: hid: raw HID events driver (C) Jiri 
Kosina Dec 16 02:08:52.445348 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 02:08:52.445356 kernel: watchdog: NMI not fully supported Dec 16 02:08:52.445366 kernel: watchdog: Hard watchdog permanently disabled Dec 16 02:08:52.445375 kernel: NET: Registered PF_INET6 protocol family Dec 16 02:08:52.445383 kernel: Segment Routing with IPv6 Dec 16 02:08:52.445391 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 02:08:52.445399 kernel: NET: Registered PF_PACKET protocol family Dec 16 02:08:52.445407 kernel: Key type dns_resolver registered Dec 16 02:08:52.445415 kernel: registered taskstats version 1 Dec 16 02:08:52.445423 kernel: Loading compiled-in X.509 certificates Dec 16 02:08:52.445433 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d' Dec 16 02:08:52.445441 kernel: Demotion targets for Node 0: null Dec 16 02:08:52.445450 kernel: Key type .fscrypt registered Dec 16 02:08:52.445458 kernel: Key type fscrypt-provisioning registered Dec 16 02:08:52.445466 kernel: ima: No TPM chip found, activating TPM-bypass! 
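The rtc-efi line above reports both a wall-clock time and the raw epoch value it corresponds to (`2025-12-16T02:08:51 UTC (1765850931)`); the two are consistent, which can be checked directly:

```python
from datetime import datetime, timezone

# Raw epoch-seconds value reported by rtc-efi in the log above.
epoch = 1765850931

# Convert to a timezone-aware UTC datetime; this should match the
# 2025-12-16T02:08:51 UTC wall-clock time printed on the same line.
wall = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(wall.isoformat())  # 2025-12-16T02:08:51+00:00
```

The subsequent log timestamps (`02:08:52.x`) land about a second later, consistent with the clock having just been set from the EFI RTC.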
Dec 16 02:08:52.445474 kernel: ima: Allocated hash algorithm: sha1 Dec 16 02:08:52.445482 kernel: ima: No architecture policies found Dec 16 02:08:52.445491 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 02:08:52.445499 kernel: clk: Disabling unused clocks Dec 16 02:08:52.445508 kernel: PM: genpd: Disabling unused power domains Dec 16 02:08:52.445515 kernel: Freeing unused kernel memory: 12480K Dec 16 02:08:52.445523 kernel: Run /init as init process Dec 16 02:08:52.445531 kernel: with arguments: Dec 16 02:08:52.445539 kernel: /init Dec 16 02:08:52.445549 kernel: with environment: Dec 16 02:08:52.445556 kernel: HOME=/ Dec 16 02:08:52.445564 kernel: TERM=linux Dec 16 02:08:52.445572 kernel: ACPI: bus type USB registered Dec 16 02:08:52.445580 kernel: usbcore: registered new interface driver usbfs Dec 16 02:08:52.445588 kernel: usbcore: registered new interface driver hub Dec 16 02:08:52.445596 kernel: usbcore: registered new device driver usb Dec 16 02:08:52.445690 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 02:08:52.445802 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 16 02:08:52.445903 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 02:08:52.445991 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 02:08:52.446075 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 16 02:08:52.446165 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 16 02:08:52.446284 kernel: hub 1-0:1.0: USB hub found Dec 16 02:08:52.446390 kernel: hub 1-0:1.0: 4 ports detected Dec 16 02:08:52.446493 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 16 02:08:52.446602 kernel: hub 2-0:1.0: USB hub found Dec 16 02:08:52.446691 kernel: hub 2-0:1.0: 4 ports detected Dec 16 02:08:52.446810 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 02:08:52.446909 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Dec 16 02:08:52.446922 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 02:08:52.446931 kernel: GPT:25804799 != 104857599 Dec 16 02:08:52.446939 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 02:08:52.446947 kernel: GPT:25804799 != 104857599 Dec 16 02:08:52.446955 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 02:08:52.446966 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 02:08:52.446975 kernel: SCSI subsystem initialized Dec 16 02:08:52.446983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 02:08:52.446992 kernel: device-mapper: uevent: version 1.0.3 Dec 16 02:08:52.447001 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 02:08:52.447009 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 02:08:52.447018 kernel: raid6: neonx8 gen() 15650 MB/s Dec 16 02:08:52.447027 kernel: raid6: neonx4 gen() 15650 MB/s Dec 16 02:08:52.447036 kernel: raid6: neonx2 gen() 13126 MB/s Dec 16 02:08:52.447044 kernel: raid6: neonx1 gen() 10416 MB/s Dec 16 02:08:52.447052 kernel: raid6: int64x8 gen() 6792 MB/s Dec 16 02:08:52.447061 kernel: raid6: int64x4 gen() 7291 MB/s Dec 16 02:08:52.447069 kernel: raid6: int64x2 gen() 6071 MB/s Dec 16 02:08:52.447078 kernel: raid6: int64x1 gen() 5028 MB/s Dec 16 02:08:52.447087 kernel: raid6: using algorithm neonx8 gen() 15650 MB/s Dec 16 02:08:52.447096 kernel: raid6: .... 
xor() 11857 MB/s, rmw enabled Dec 16 02:08:52.447105 kernel: raid6: using neon recovery algorithm Dec 16 02:08:52.447114 kernel: xor: measuring software checksum speed Dec 16 02:08:52.447125 kernel: 8regs : 21101 MB/sec Dec 16 02:08:52.447133 kernel: 32regs : 21326 MB/sec Dec 16 02:08:52.447141 kernel: arm64_neon : 28167 MB/sec Dec 16 02:08:52.447151 kernel: xor: using function: arm64_neon (28167 MB/sec) Dec 16 02:08:52.447261 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 02:08:52.447275 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 02:08:52.447284 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (274) Dec 16 02:08:52.447292 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 02:08:52.447301 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:08:52.447311 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 02:08:52.447320 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 02:08:52.447328 kernel: loop: module loaded Dec 16 02:08:52.447337 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 02:08:52.447345 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 02:08:52.447355 systemd[1]: Successfully made /usr/ read-only. Dec 16 02:08:52.447367 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:08:52.447377 systemd[1]: Detected virtualization kvm. 
Dec 16 02:08:52.447477 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 16 02:08:52.447490 systemd[1]: Detected architecture arm64. Dec 16 02:08:52.447498 systemd[1]: Running in initrd. Dec 16 02:08:52.447507 systemd[1]: No hostname configured, using default hostname. Dec 16 02:08:52.447517 systemd[1]: Hostname set to <localhost>. Dec 16 02:08:52.447526 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 02:08:52.447535 systemd[1]: Queued start job for default target initrd.target. Dec 16 02:08:52.447543 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:08:52.447552 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:08:52.447561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:08:52.447572 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 02:08:52.447581 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:08:52.447590 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 02:08:52.447599 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 02:08:52.447608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:08:52.447617 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:08:52.447627 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:08:52.447636 systemd[1]: Reached target paths.target - Path Units. Dec 16 02:08:52.447644 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:08:52.447655 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:08:52.447664 systemd[1]: Reached target timers.target - Timer Units. 
Dec 16 02:08:52.447673 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:08:52.447681 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:08:52.447692 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:08:52.447700 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 02:08:52.447709 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 02:08:52.447718 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:08:52.447727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:08:52.447735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:08:52.447744 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:08:52.447755 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 02:08:52.447763 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 02:08:52.447772 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:08:52.447781 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 02:08:52.447790 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 02:08:52.447799 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 02:08:52.447809 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:08:52.447818 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:08:52.447827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:08:52.447836 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Dec 16 02:08:52.447846 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:08:52.447856 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 02:08:52.447882 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 02:08:52.447915 systemd-journald[417]: Collecting audit messages is enabled. Dec 16 02:08:52.447939 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 02:08:52.447948 kernel: Bridge firewalling registered Dec 16 02:08:52.447957 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 02:08:52.447966 kernel: audit: type=1130 audit(1765850932.390:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.447975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:08:52.447984 kernel: audit: type=1130 audit(1765850932.394:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.447994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 02:08:52.448004 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:08:52.448013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:08:52.448022 kernel: audit: type=1130 audit(1765850932.410:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:52.448031 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:08:52.448040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:08:52.448051 kernel: audit: type=1130 audit(1765850932.420:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.448060 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 02:08:52.448069 kernel: audit: type=1334 audit(1765850932.422:6): prog-id=6 op=LOAD Dec 16 02:08:52.448077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:08:52.448087 kernel: audit: type=1130 audit(1765850932.430:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.448095 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:08:52.448106 kernel: audit: type=1130 audit(1765850932.435:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.448115 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 02:08:52.448125 systemd-journald[417]: Journal started Dec 16 02:08:52.448144 systemd-journald[417]: Runtime Journal (/run/log/journal/8135d536eed64c9daebc4058683f04af) is 8M, max 319.5M, 311.5M free. Dec 16 02:08:52.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:52.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.422000 audit: BPF prog-id=6 op=LOAD Dec 16 02:08:52.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.383837 systemd-modules-load[418]: Inserted module 'br_netfilter' Dec 16 02:08:52.467945 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:08:52.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.472540 systemd-resolved[437]: Positive Trust Anchors: Dec 16 02:08:52.473503 kernel: audit: type=1130 audit(1765850932.468:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.472557 systemd-resolved[437]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:08:52.472560 systemd-resolved[437]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:08:52.472592 systemd-resolved[437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:08:52.473003 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 02:08:52.487720 dracut-cmdline[444]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:08:52.498018 systemd-tmpfiles[455]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 02:08:52.499895 systemd-resolved[437]: Defaulting to hostname 'linux'. Dec 16 02:08:52.500699 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 02:08:52.501839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:08:52.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:52.506877 kernel: audit: type=1130 audit(1765850932.500:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.507011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:08:52.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.566984 kernel: Loading iSCSI transport class v2.0-870. Dec 16 02:08:52.577905 kernel: iscsi: registered transport (tcp) Dec 16 02:08:52.591876 kernel: iscsi: registered transport (qla4xxx) Dec 16 02:08:52.591903 kernel: QLogic iSCSI HBA Driver Dec 16 02:08:52.613843 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 02:08:52.635360 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:08:52.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.636803 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:08:52.684331 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 02:08:52.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.686732 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 02:08:52.688338 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 16 02:08:52.722478 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:08:52.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.723000 audit: BPF prog-id=7 op=LOAD Dec 16 02:08:52.723000 audit: BPF prog-id=8 op=LOAD Dec 16 02:08:52.724986 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:08:52.754682 systemd-udevd[695]: Using default interface naming scheme 'v257'. Dec 16 02:08:52.762599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:08:52.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.765553 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 02:08:52.791309 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:08:52.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.792000 audit: BPF prog-id=9 op=LOAD Dec 16 02:08:52.794316 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 02:08:52.796841 dracut-pre-trigger[766]: rd.md=0: removing MD RAID activation Dec 16 02:08:52.819534 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 02:08:52.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:52.821986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:08:52.834279 systemd-networkd[809]: lo: Link UP Dec 16 02:08:52.834288 systemd-networkd[809]: lo: Gained carrier Dec 16 02:08:52.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.834895 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 02:08:52.836418 systemd[1]: Reached target network.target - Network. Dec 16 02:08:52.910885 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:08:52.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:52.914149 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 02:08:53.003550 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 02:08:53.014056 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 02:08:53.020822 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 16 02:08:53.020854 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 16 02:08:53.022796 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 16 02:08:53.024757 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 02:08:53.037110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Dec 16 02:08:53.038942 systemd-networkd[809]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:08:53.038946 systemd-networkd[809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 02:08:53.039705 systemd-networkd[809]: eth0: Link UP Dec 16 02:08:53.039998 systemd-networkd[809]: eth0: Gained carrier Dec 16 02:08:53.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:53.040010 systemd-networkd[809]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:08:53.042523 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 02:08:53.044161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:08:53.044274 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:08:53.045399 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:08:53.049104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:08:53.062829 disk-uuid[879]: Primary Header is updated. Dec 16 02:08:53.062829 disk-uuid[879]: Secondary Entries is updated. Dec 16 02:08:53.062829 disk-uuid[879]: Secondary Header is updated. Dec 16 02:08:53.068155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:08:53.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:53.074029 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 16 02:08:53.074278 kernel: usbcore: registered new interface driver usbhid Dec 16 02:08:53.074991 kernel: usbhid: USB HID core driver Dec 16 02:08:53.114956 systemd-networkd[809]: eth0: DHCPv4 address 10.0.26.207/25, gateway 10.0.26.129 acquired from 10.0.26.129 Dec 16 02:08:53.138970 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 02:08:53.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:53.140475 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:08:53.141839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:08:53.143829 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:08:53.146729 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 02:08:53.170996 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:08:53.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:54.097780 disk-uuid[881]: Warning: The kernel is still using the old partition table. Dec 16 02:08:54.097780 disk-uuid[881]: The new table will be used at the next reboot or after you Dec 16 02:08:54.097780 disk-uuid[881]: run partprobe(8) or kpartx(8) Dec 16 02:08:54.097780 disk-uuid[881]: The operation has completed successfully. Dec 16 02:08:54.102844 systemd[1]: disk-uuid.service: Deactivated successfully. 
Dec 16 02:08:54.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:54.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:54.102975 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 02:08:54.104924 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 02:08:54.142901 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (911) Dec 16 02:08:54.145244 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:08:54.145289 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:08:54.148987 kernel: BTRFS info (device vda6): turning on async discard Dec 16 02:08:54.149021 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 02:08:54.154885 kernel: BTRFS info (device vda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:08:54.155363 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 02:08:54.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:54.157581 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 02:08:54.294346 ignition[930]: Ignition 2.24.0 Dec 16 02:08:54.294359 ignition[930]: Stage: fetch-offline Dec 16 02:08:54.294398 ignition[930]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:08:54.294407 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 02:08:54.294565 ignition[930]: parsed url from cmdline: "" Dec 16 02:08:54.299166 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:08:54.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:54.294568 ignition[930]: no config URL provided Dec 16 02:08:54.294573 ignition[930]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 02:08:54.294580 ignition[930]: no config at "/usr/lib/ignition/user.ign" Dec 16 02:08:54.302376 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 02:08:54.294584 ignition[930]: failed to fetch config: resource requires networking Dec 16 02:08:54.294729 ignition[930]: Ignition finished successfully Dec 16 02:08:54.331410 ignition[941]: Ignition 2.24.0 Dec 16 02:08:54.331433 ignition[941]: Stage: fetch Dec 16 02:08:54.331586 ignition[941]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:08:54.331595 ignition[941]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 02:08:54.331674 ignition[941]: parsed url from cmdline: "" Dec 16 02:08:54.331677 ignition[941]: no config URL provided Dec 16 02:08:54.331681 ignition[941]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 02:08:54.331686 ignition[941]: no config at "/usr/lib/ignition/user.ign" Dec 16 02:08:54.332094 ignition[941]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 16 02:08:54.332120 ignition[941]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 16 02:08:54.332487 ignition[941]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 16 02:08:54.559089 systemd-networkd[809]: eth0: Gained IPv6LL Dec 16 02:08:54.712440 ignition[941]: GET result: OK Dec 16 02:08:54.712663 ignition[941]: parsing config with SHA512: c4fa912478015bbcccaeee7a28f97064e3c5e5d47bbac90b58d6a42ea126bc31a7d4d9c1c3a5a9c2312b4460505abfad636838880271d9402c9f45e4d1d229ae Dec 16 02:08:54.718827 unknown[941]: fetched base config from "system" Dec 16 02:08:54.718838 unknown[941]: fetched base config from "system" Dec 16 02:08:54.719175 ignition[941]: fetch: fetch complete Dec 16 02:08:54.718843 unknown[941]: fetched user config from "openstack" Dec 16 02:08:54.719180 ignition[941]: fetch: fetch passed Dec 16 02:08:54.719223 ignition[941]: Ignition finished successfully Dec 16 02:08:54.726264 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 16 02:08:54.726290 kernel: audit: type=1130 audit(1765850934.722:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:54.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:54.721952 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 02:08:54.727239 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 16 02:08:54.761433 ignition[949]: Ignition 2.24.0
Dec 16 02:08:54.761454 ignition[949]: Stage: kargs
Dec 16 02:08:54.761603 ignition[949]: no configs at "/usr/lib/ignition/base.d"
Dec 16 02:08:54.761612 ignition[949]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 02:08:54.762364 ignition[949]: kargs: kargs passed
Dec 16 02:08:54.762408 ignition[949]: Ignition finished successfully
Dec 16 02:08:54.766588 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 02:08:54.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:54.768737 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 02:08:54.772211 kernel: audit: type=1130 audit(1765850934.766:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:54.795747 ignition[956]: Ignition 2.24.0
Dec 16 02:08:54.795768 ignition[956]: Stage: disks
Dec 16 02:08:54.795940 ignition[956]: no configs at "/usr/lib/ignition/base.d"
Dec 16 02:08:54.795948 ignition[956]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 02:08:54.799209 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 02:08:54.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:54.796662 ignition[956]: disks: disks passed
Dec 16 02:08:54.805236 kernel: audit: type=1130 audit(1765850934.800:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:54.801164 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 02:08:54.796704 ignition[956]: Ignition finished successfully
Dec 16 02:08:54.804746 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 02:08:54.806241 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 02:08:54.807740 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 02:08:54.809130 systemd[1]: Reached target basic.target - Basic System.
Dec 16 02:08:54.811623 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 02:08:54.853707 systemd-fsck[965]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Dec 16 02:08:54.857659 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 02:08:54.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:54.860184 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 02:08:54.864622 kernel: audit: type=1130 audit(1765850934.859:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:54.957926 kernel: EXT4-fs (vda9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none.
Dec 16 02:08:54.958428 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 02:08:54.959645 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 02:08:54.962643 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 02:08:54.964463 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 02:08:54.965424 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 02:08:54.966099 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 16 02:08:54.968848 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 02:08:54.969440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 02:08:54.984726 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 02:08:54.987063 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 02:08:54.996876 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (973)
Dec 16 02:08:54.999433 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47
Dec 16 02:08:54.999505 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 02:08:55.003922 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 02:08:55.003987 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 02:08:55.005256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 02:08:55.036897 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 02:08:55.128549 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 02:08:55.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:55.130952 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 02:08:55.134649 kernel: audit: type=1130 audit(1765850935.128:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:55.134606 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 02:08:55.147907 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 02:08:55.150896 kernel: BTRFS info (device vda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47
Dec 16 02:08:55.167072 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 02:08:55.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:55.171892 kernel: audit: type=1130 audit(1765850935.168:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:55.179104 ignition[1074]: INFO : Ignition 2.24.0
Dec 16 02:08:55.179104 ignition[1074]: INFO : Stage: mount
Dec 16 02:08:55.181821 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 02:08:55.181821 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 02:08:55.181821 ignition[1074]: INFO : mount: mount passed
Dec 16 02:08:55.181821 ignition[1074]: INFO : Ignition finished successfully
Dec 16 02:08:55.188126 kernel: audit: type=1130 audit(1765850935.184:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:55.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:08:55.182710 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 02:08:56.064934 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 02:08:58.068930 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 02:09:02.074898 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 02:09:02.083540 coreos-metadata[975]: Dec 16 02:09:02.083 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 02:09:02.103594 coreos-metadata[975]: Dec 16 02:09:02.103 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 16 02:09:02.233103 coreos-metadata[975]: Dec 16 02:09:02.233 INFO Fetch successful
Dec 16 02:09:02.234373 coreos-metadata[975]: Dec 16 02:09:02.233 INFO wrote hostname ci-4547-0-0-9-b4376e68e3 to /sysroot/etc/hostname
Dec 16 02:09:02.236023 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 16 02:09:02.237284 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 16 02:09:02.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:02.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:02.239670 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 02:09:02.248069 kernel: audit: type=1130 audit(1765850942.237:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:02.248094 kernel: audit: type=1131 audit(1765850942.237:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:02.270310 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 02:09:02.299986 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1092)
Dec 16 02:09:02.302418 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47
Dec 16 02:09:02.302443 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 02:09:02.306904 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 02:09:02.306948 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 02:09:02.308386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 02:09:02.336704 ignition[1110]: INFO : Ignition 2.24.0
Dec 16 02:09:02.336704 ignition[1110]: INFO : Stage: files
Dec 16 02:09:02.338753 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 02:09:02.338753 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 02:09:02.338753 ignition[1110]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 02:09:02.342687 ignition[1110]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 02:09:02.342687 ignition[1110]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 02:09:02.346422 ignition[1110]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 02:09:02.346422 ignition[1110]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 02:09:02.346422 ignition[1110]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 02:09:02.343762 unknown[1110]: wrote ssh authorized keys file for user: core
Dec 16 02:09:02.351436 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 02:09:02.351436 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 16 02:09:02.406650 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 02:09:02.573152 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 02:09:02.573152 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 02:09:02.577102 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Dec 16 02:09:02.849292 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 02:09:03.383543 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 02:09:03.386108 ignition[1110]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 02:09:03.387957 ignition[1110]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 02:09:03.391891 ignition[1110]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 02:09:03.391891 ignition[1110]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 02:09:03.391891 ignition[1110]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 02:09:03.391891 ignition[1110]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 02:09:03.402979 kernel: audit: type=1130 audit(1765850943.396:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.403047 ignition[1110]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 02:09:03.403047 ignition[1110]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 02:09:03.403047 ignition[1110]: INFO : files: files passed
Dec 16 02:09:03.403047 ignition[1110]: INFO : Ignition finished successfully
Dec 16 02:09:03.396019 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 02:09:03.398157 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 02:09:03.422360 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 02:09:03.425587 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 02:09:03.425713 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 02:09:03.432572 kernel: audit: type=1130 audit(1765850943.427:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.432609 kernel: audit: type=1131 audit(1765850943.427:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.436275 initrd-setup-root-after-ignition[1143]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 02:09:03.437773 initrd-setup-root-after-ignition[1147]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 02:09:03.439038 initrd-setup-root-after-ignition[1143]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 02:09:03.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.438731 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 02:09:03.447151 kernel: audit: type=1130 audit(1765850943.440:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.441202 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 02:09:03.447029 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 02:09:03.487162 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 02:09:03.487288 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 02:09:03.494481 kernel: audit: type=1130 audit(1765850943.488:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.494507 kernel: audit: type=1131 audit(1765850943.488:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.489200 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 02:09:03.495330 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 02:09:03.497052 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 02:09:03.497980 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 02:09:03.532631 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 02:09:03.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.535076 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 02:09:03.539003 kernel: audit: type=1130 audit(1765850943.533:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.555404 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 02:09:03.555612 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 02:09:03.557913 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 02:09:03.559834 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 02:09:03.561451 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 02:09:03.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.561583 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 02:09:03.566787 kernel: audit: type=1131 audit(1765850943.562:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.565853 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 02:09:03.567708 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 02:09:03.569132 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 02:09:03.570644 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 02:09:03.572366 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 02:09:03.574017 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 02:09:03.575658 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 02:09:03.577256 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 02:09:03.579028 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 02:09:03.580728 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 02:09:03.582263 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 02:09:03.583600 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 02:09:03.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.583726 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 02:09:03.585664 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 02:09:03.586779 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 02:09:03.588516 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 02:09:03.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.588592 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 02:09:03.590277 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 02:09:03.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.590394 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 02:09:03.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.592621 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 02:09:03.592747 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 02:09:03.595031 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 02:09:03.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.595138 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 02:09:03.597546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 02:09:03.598922 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 02:09:03.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.599063 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 02:09:03.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.601738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 02:09:03.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.603166 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 02:09:03.603299 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 02:09:03.605023 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 02:09:03.605130 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 02:09:03.606642 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 02:09:03.606748 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 02:09:03.613135 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 02:09:03.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.614906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 02:09:03.622939 ignition[1167]: INFO : Ignition 2.24.0
Dec 16 02:09:03.622939 ignition[1167]: INFO : Stage: umount
Dec 16 02:09:03.624473 ignition[1167]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 02:09:03.624473 ignition[1167]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 02:09:03.624473 ignition[1167]: INFO : umount: umount passed
Dec 16 02:09:03.624473 ignition[1167]: INFO : Ignition finished successfully
Dec 16 02:09:03.625581 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 02:09:03.626902 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 02:09:03.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.629158 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 02:09:03.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.629591 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 02:09:03.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.629628 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 02:09:03.630707 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 02:09:03.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.630753 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 02:09:03.632289 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 02:09:03.632340 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 02:09:03.633872 systemd[1]: Stopped target network.target - Network.
Dec 16 02:09:03.635298 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 02:09:03.635352 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 02:09:03.636892 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 02:09:03.638283 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 02:09:03.641903 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 02:09:03.643166 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 02:09:03.644613 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 02:09:03.646009 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 02:09:03.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.646048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 02:09:03.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.647537 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 02:09:03.647566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 02:09:03.649467 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 02:09:03.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.649488 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 02:09:03.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.650919 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 02:09:03.650974 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 02:09:03.652411 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 02:09:03.652453 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 02:09:03.654076 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 02:09:03.655323 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 02:09:03.657044 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 02:09:03.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.657140 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 02:09:03.658676 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 02:09:03.658766 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 02:09:03.664736 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 02:09:03.669000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 02:09:03.664851 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 02:09:03.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.670435 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 02:09:03.670542 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 02:09:03.675150 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 02:09:03.676153 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 02:09:03.676198 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 02:09:03.680000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 02:09:03.678589 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 02:09:03.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:03.679400 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 02:09:03.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.679461 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:09:03.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.681516 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 02:09:03.681577 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:09:03.683077 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 02:09:03.683120 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 02:09:03.684710 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:09:03.693011 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 02:09:03.693159 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:09:03.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.695153 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 02:09:03.695192 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 02:09:03.696757 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 02:09:03.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:03.696788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:09:03.698731 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 02:09:03.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.698782 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:09:03.701307 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 02:09:03.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.701362 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 02:09:03.703615 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 02:09:03.703668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:09:03.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.706984 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 02:09:03.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.707844 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 02:09:03.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:03.707923 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:09:03.709897 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 02:09:03.709963 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:09:03.711942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:09:03.711990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:09:03.728149 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 02:09:03.728263 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 02:09:03.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.730437 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 02:09:03.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:03.730537 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 02:09:03.732391 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 02:09:03.734367 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 02:09:03.743795 systemd[1]: Switching root. Dec 16 02:09:03.785274 systemd-journald[417]: Journal stopped Dec 16 02:09:05.273889 systemd-journald[417]: Received SIGTERM from PID 1 (systemd). 
Dec 16 02:09:05.273971 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 02:09:05.273988 kernel: SELinux: policy capability open_perms=1 Dec 16 02:09:05.274000 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 02:09:05.274017 kernel: SELinux: policy capability always_check_network=0 Dec 16 02:09:05.274027 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 02:09:05.274037 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 02:09:05.274047 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 02:09:05.274060 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 02:09:05.274951 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 02:09:05.274978 systemd[1]: Successfully loaded SELinux policy in 67.891ms. Dec 16 02:09:05.275000 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.891ms. Dec 16 02:09:05.275013 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:09:05.275024 systemd[1]: Detected virtualization kvm. Dec 16 02:09:05.275035 systemd[1]: Detected architecture arm64. Dec 16 02:09:05.275045 systemd[1]: Detected first boot. Dec 16 02:09:05.275057 systemd[1]: Hostname set to . Dec 16 02:09:05.275071 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 02:09:05.275082 zram_generator::config[1211]: No configuration found. Dec 16 02:09:05.275099 kernel: NET: Registered PF_VSOCK protocol family Dec 16 02:09:05.275110 systemd[1]: Populated /etc with preset unit settings. Dec 16 02:09:05.275121 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 02:09:05.275131 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Dec 16 02:09:05.275145 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 02:09:05.275158 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 02:09:05.275170 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 02:09:05.275180 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 02:09:05.275191 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 02:09:05.275203 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 02:09:05.275213 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 02:09:05.275224 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 02:09:05.275237 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 02:09:05.275248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:09:05.275259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:09:05.275270 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 02:09:05.275281 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 02:09:05.275292 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 02:09:05.275303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:09:05.275316 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 02:09:05.275327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:09:05.275339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Dec 16 02:09:05.275350 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 02:09:05.275362 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 02:09:05.275374 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 02:09:05.275386 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 02:09:05.275396 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:09:05.275407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:09:05.275418 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 02:09:05.275430 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:09:05.275441 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:09:05.275453 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 02:09:05.275465 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 02:09:05.275476 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 02:09:05.275491 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:09:05.275501 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 02:09:05.275512 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:09:05.275523 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 02:09:05.275536 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 02:09:05.275547 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:09:05.275557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:09:05.275568 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Dec 16 02:09:05.275579 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 02:09:05.275589 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 02:09:05.275600 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 02:09:05.275614 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 02:09:05.275625 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 02:09:05.275635 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 02:09:05.275646 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 02:09:05.275657 systemd[1]: Reached target machines.target - Containers. Dec 16 02:09:05.275668 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 02:09:05.275679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:09:05.275692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:09:05.275703 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 02:09:05.275716 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:09:05.275727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 02:09:05.275739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:09:05.275750 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 02:09:05.275761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:09:05.275772 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 16 02:09:05.275785 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 02:09:05.275796 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 02:09:05.275808 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 02:09:05.275819 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 02:09:05.275831 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:09:05.275841 kernel: fuse: init (API version 7.41) Dec 16 02:09:05.275852 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:09:05.276557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:09:05.276582 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 02:09:05.276598 kernel: ACPI: bus type drm_connector registered Dec 16 02:09:05.276610 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 02:09:05.276622 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 02:09:05.276633 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:09:05.276670 systemd-journald[1281]: Collecting audit messages is enabled. Dec 16 02:09:05.276699 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 02:09:05.276712 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 02:09:05.276723 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 02:09:05.276734 systemd-journald[1281]: Journal started Dec 16 02:09:05.276756 systemd-journald[1281]: Runtime Journal (/run/log/journal/8135d536eed64c9daebc4058683f04af) is 8M, max 319.5M, 311.5M free. 
Dec 16 02:09:05.142000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 02:09:05.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.229000 audit: BPF prog-id=14 op=UNLOAD Dec 16 02:09:05.229000 audit: BPF prog-id=13 op=UNLOAD Dec 16 02:09:05.229000 audit: BPF prog-id=15 op=LOAD Dec 16 02:09:05.229000 audit: BPF prog-id=16 op=LOAD Dec 16 02:09:05.229000 audit: BPF prog-id=17 op=LOAD Dec 16 02:09:05.271000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 02:09:05.271000 audit[1281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffffd8b6500 a2=4000 a3=0 items=0 ppid=1 pid=1281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:05.271000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 02:09:05.051399 systemd[1]: Queued start job for default target multi-user.target. Dec 16 02:09:05.074326 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 02:09:05.074808 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 02:09:05.278702 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 16 02:09:05.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.279658 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 02:09:05.280948 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 02:09:05.282109 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 02:09:05.283376 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:09:05.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.284932 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 02:09:05.285103 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 02:09:05.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.286511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:09:05.286686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:09:05.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:05.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.288295 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 02:09:05.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.289569 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 02:09:05.289752 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 02:09:05.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.291150 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:09:05.291333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:09:05.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.292700 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 16 02:09:05.292855 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 02:09:05.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.294066 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:09:05.294226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:09:05.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.295664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 02:09:05.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.297321 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:09:05.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:05.299369 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 02:09:05.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.300999 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 02:09:05.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.313632 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:09:05.315043 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 02:09:05.317197 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 02:09:05.319181 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 02:09:05.320181 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 02:09:05.320210 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:09:05.322043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 02:09:05.328476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:09:05.328594 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:09:05.332948 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 16 02:09:05.334984 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 02:09:05.336058 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 02:09:05.337028 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 02:09:05.338046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:09:05.341045 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:09:05.344128 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 02:09:05.347119 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 02:09:05.348707 systemd-journald[1281]: Time spent on flushing to /var/log/journal/8135d536eed64c9daebc4058683f04af is 34.971ms for 1811 entries. Dec 16 02:09:05.348707 systemd-journald[1281]: System Journal (/var/log/journal/8135d536eed64c9daebc4058683f04af) is 8M, max 588.1M, 580.1M free. Dec 16 02:09:05.390761 systemd-journald[1281]: Received client request to flush runtime journal. Dec 16 02:09:05.390796 kernel: loop1: detected capacity change from 0 to 45344 Dec 16 02:09:05.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.349438 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Dec 16 02:09:05.351471 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 02:09:05.353203 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 02:09:05.357825 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 02:09:05.360964 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 02:09:05.373759 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:09:05.390165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:09:05.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.392367 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 02:09:05.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.403170 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 02:09:05.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.405000 audit: BPF prog-id=18 op=LOAD Dec 16 02:09:05.405000 audit: BPF prog-id=19 op=LOAD Dec 16 02:09:05.406000 audit: BPF prog-id=20 op=LOAD Dec 16 02:09:05.409000 audit: BPF prog-id=21 op=LOAD Dec 16 02:09:05.408192 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 02:09:05.411437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 16 02:09:05.415950 kernel: loop2: detected capacity change from 0 to 200800 Dec 16 02:09:05.416052 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:09:05.417404 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 02:09:05.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.422000 audit: BPF prog-id=22 op=LOAD Dec 16 02:09:05.422000 audit: BPF prog-id=23 op=LOAD Dec 16 02:09:05.422000 audit: BPF prog-id=24 op=LOAD Dec 16 02:09:05.424510 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 02:09:05.425000 audit: BPF prog-id=25 op=LOAD Dec 16 02:09:05.425000 audit: BPF prog-id=26 op=LOAD Dec 16 02:09:05.425000 audit: BPF prog-id=27 op=LOAD Dec 16 02:09:05.427007 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 02:09:05.446346 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Dec 16 02:09:05.446359 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Dec 16 02:09:05.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:05.451991 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:09:05.458508 systemd-nsresourced[1352]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 02:09:05.459685 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
Dec 16 02:09:05.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:05.464877 kernel: loop3: detected capacity change from 0 to 1648
Dec 16 02:09:05.472654 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 02:09:05.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:05.495919 kernel: loop4: detected capacity change from 0 to 100192
Dec 16 02:09:05.522711 systemd-oomd[1348]: No swap; memory pressure usage will be degraded
Dec 16 02:09:05.523444 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 16 02:09:05.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:05.528903 kernel: loop5: detected capacity change from 0 to 45344
Dec 16 02:09:05.535389 systemd-resolved[1349]: Positive Trust Anchors:
Dec 16 02:09:05.535450 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 02:09:05.535454 systemd-resolved[1349]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 02:09:05.535485 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 02:09:05.541890 kernel: loop6: detected capacity change from 0 to 200800
Dec 16 02:09:05.547024 systemd-resolved[1349]: Using system hostname 'ci-4547-0-0-9-b4376e68e3'.
Dec 16 02:09:05.549202 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 02:09:05.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:05.550525 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 02:09:05.558888 kernel: loop7: detected capacity change from 0 to 1648
Dec 16 02:09:05.564881 kernel: loop1: detected capacity change from 0 to 100192
Dec 16 02:09:05.585639 (sd-merge)[1373]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'.
Dec 16 02:09:05.588436 (sd-merge)[1373]: Merged extensions into '/usr'.
Dec 16 02:09:05.592955 systemd[1]: Reload requested from client PID 1332 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 02:09:05.592972 systemd[1]: Reloading...
Dec 16 02:09:05.641893 zram_generator::config[1400]: No configuration found.
Dec 16 02:09:05.803498 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 02:09:05.803934 systemd[1]: Reloading finished in 210 ms.
Dec 16 02:09:05.834270 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 02:09:05.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:05.836883 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 02:09:05.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:05.850601 systemd[1]: Starting ensure-sysext.service...
Dec 16 02:09:05.852344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 02:09:05.853000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 02:09:05.853000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 02:09:05.853000 audit: BPF prog-id=28 op=LOAD
Dec 16 02:09:05.853000 audit: BPF prog-id=29 op=LOAD
Dec 16 02:09:05.854883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 02:09:05.855000 audit: BPF prog-id=30 op=LOAD
Dec 16 02:09:05.855000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=31 op=LOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=32 op=LOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=33 op=LOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=34 op=LOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=35 op=LOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 02:09:05.856000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 02:09:05.857000 audit: BPF prog-id=36 op=LOAD
Dec 16 02:09:05.857000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 02:09:05.858000 audit: BPF prog-id=37 op=LOAD
Dec 16 02:09:05.858000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 02:09:05.858000 audit: BPF prog-id=38 op=LOAD
Dec 16 02:09:05.858000 audit: BPF prog-id=39 op=LOAD
Dec 16 02:09:05.858000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 02:09:05.858000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 02:09:05.859000 audit: BPF prog-id=40 op=LOAD
Dec 16 02:09:05.859000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 02:09:05.859000 audit: BPF prog-id=41 op=LOAD
Dec 16 02:09:05.859000 audit: BPF prog-id=42 op=LOAD
Dec 16 02:09:05.859000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 02:09:05.859000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 02:09:05.864403 systemd[1]: Reload requested from client PID 1440 ('systemctl') (unit ensure-sysext.service)...
Dec 16 02:09:05.864419 systemd[1]: Reloading...
Dec 16 02:09:05.867349 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 02:09:05.867383 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 02:09:05.868102 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 02:09:05.869061 systemd-tmpfiles[1441]: ACLs are not supported, ignoring.
Dec 16 02:09:05.869112 systemd-tmpfiles[1441]: ACLs are not supported, ignoring.
Dec 16 02:09:05.875081 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 02:09:05.875099 systemd-tmpfiles[1441]: Skipping /boot
Dec 16 02:09:05.879439 systemd-udevd[1442]: Using default interface naming scheme 'v257'.
Dec 16 02:09:05.882537 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 02:09:05.882555 systemd-tmpfiles[1441]: Skipping /boot
Dec 16 02:09:05.921128 zram_generator::config[1474]: No configuration found.
Dec 16 02:09:06.034897 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 02:09:06.120882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 02:09:06.122375 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 16 02:09:06.122731 systemd[1]: Reloading finished in 258 ms.
Dec 16 02:09:06.136791 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 02:09:06.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.141000 audit: BPF prog-id=43 op=LOAD
Dec 16 02:09:06.142000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 02:09:06.142000 audit: BPF prog-id=44 op=LOAD
Dec 16 02:09:06.142000 audit: BPF prog-id=45 op=LOAD
Dec 16 02:09:06.142000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 02:09:06.142000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 02:09:06.143000 audit: BPF prog-id=46 op=LOAD
Dec 16 02:09:06.143000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 02:09:06.143000 audit: BPF prog-id=47 op=LOAD
Dec 16 02:09:06.143000 audit: BPF prog-id=48 op=LOAD
Dec 16 02:09:06.143000 audit: BPF prog-id=41 op=UNLOAD
Dec 16 02:09:06.143000 audit: BPF prog-id=42 op=UNLOAD
Dec 16 02:09:06.143000 audit: BPF prog-id=49 op=LOAD
Dec 16 02:09:06.144000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 02:09:06.144000 audit: BPF prog-id=50 op=LOAD
Dec 16 02:09:06.144000 audit: BPF prog-id=51 op=LOAD
Dec 16 02:09:06.144000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 02:09:06.144000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 02:09:06.144000 audit: BPF prog-id=52 op=LOAD
Dec 16 02:09:06.144000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 02:09:06.145000 audit: BPF prog-id=53 op=LOAD
Dec 16 02:09:06.152991 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0
Dec 16 02:09:06.153933 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 16 02:09:06.153967 kernel: [drm] features: -context_init
Dec 16 02:09:06.156000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 02:09:06.156000 audit: BPF prog-id=54 op=LOAD
Dec 16 02:09:06.156000 audit: BPF prog-id=55 op=LOAD
Dec 16 02:09:06.156000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 02:09:06.156000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 02:09:06.156000 audit: BPF prog-id=56 op=LOAD
Dec 16 02:09:06.156000 audit: BPF prog-id=57 op=LOAD
Dec 16 02:09:06.158333 kernel: [drm] number of scanouts: 1
Dec 16 02:09:06.158356 kernel: [drm] number of cap sets: 0
Dec 16 02:09:06.156000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 02:09:06.156000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 02:09:06.159949 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0
Dec 16 02:09:06.160480 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 02:09:06.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.164910 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 02:09:06.173765 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 16 02:09:06.183927 systemd[1]: Finished ensure-sysext.service.
Dec 16 02:09:06.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.200974 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 02:09:06.203311 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 02:09:06.204587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 02:09:06.228104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 02:09:06.230140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 02:09:06.234021 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 02:09:06.237795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 02:09:06.241422 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm...
Dec 16 02:09:06.243101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 02:09:06.243229 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 02:09:06.244310 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 02:09:06.246698 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 02:09:06.248059 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 02:09:06.252684 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 02:09:06.252759 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 02:09:06.251539 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 02:09:06.256914 kernel: PTP clock support registered
Dec 16 02:09:06.256000 audit: BPF prog-id=58 op=LOAD
Dec 16 02:09:06.257638 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 02:09:06.258671 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 02:09:06.269357 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 02:09:06.274036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 02:09:06.276391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 02:09:06.276659 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 02:09:06.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.278615 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 02:09:06.278819 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 02:09:06.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.280267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 02:09:06.280466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 02:09:06.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.282343 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 02:09:06.287068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 02:09:06.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.289042 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully.
Dec 16 02:09:06.289247 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm.
Dec 16 02:09:06.290797 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 02:09:06.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.293000 audit[1583]: SYSTEM_BOOT pid=1583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:06.302141 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 02:09:06.302000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 02:09:06.302000 audit[1603]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdae762e0 a2=420 a3=0 items=0 ppid=1562 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:06.302000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 02:09:06.303589 augenrules[1603]: No rules
Dec 16 02:09:06.305995 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 02:09:06.315311 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 02:09:06.321036 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 02:09:06.324008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 02:09:06.324084 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 02:09:06.367451 systemd-networkd[1582]: lo: Link UP
Dec 16 02:09:06.367461 systemd-networkd[1582]: lo: Gained carrier
Dec 16 02:09:06.368928 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 02:09:06.369121 systemd-networkd[1582]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 02:09:06.369131 systemd-networkd[1582]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 02:09:06.369792 systemd-networkd[1582]: eth0: Link UP
Dec 16 02:09:06.369957 systemd-networkd[1582]: eth0: Gained carrier
Dec 16 02:09:06.369972 systemd-networkd[1582]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 02:09:06.370183 systemd[1]: Reached target network.target - Network.
Dec 16 02:09:06.372524 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 02:09:06.374627 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 02:09:06.384130 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 02:09:06.385668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 02:09:06.389026 systemd-networkd[1582]: eth0: DHCPv4 address 10.0.26.207/25, gateway 10.0.26.129 acquired from 10.0.26.129
Dec 16 02:09:06.393093 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 02:09:06.450080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 02:09:06.999573 ldconfig[1575]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 02:09:07.003841 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 02:09:07.007375 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 02:09:07.040891 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 02:09:07.042145 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 02:09:07.043195 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 02:09:07.044280 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 02:09:07.045519 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 02:09:07.046722 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 02:09:07.047918 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 16 02:09:07.049033 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 16 02:09:07.049972 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 02:09:07.051072 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 02:09:07.051103 systemd[1]: Reached target paths.target - Path Units.
Dec 16 02:09:07.051856 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 02:09:07.053281 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 02:09:07.055420 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 02:09:07.058550 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 02:09:07.059856 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 02:09:07.060921 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 02:09:07.066958 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 02:09:07.068122 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 02:09:07.069707 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 02:09:07.070776 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 02:09:07.071668 systemd[1]: Reached target basic.target - Basic System.
Dec 16 02:09:07.072554 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 02:09:07.072586 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 02:09:07.074986 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 02:09:07.076663 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 02:09:07.078811 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 02:09:07.080667 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 02:09:07.083988 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 02:09:07.085927 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 02:09:07.086977 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 02:09:07.088560 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 02:09:07.091027 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 02:09:07.093043 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 02:09:07.094808 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 02:09:07.098220 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 02:09:07.102075 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 02:09:07.107723 jq[1633]: false
Dec 16 02:09:07.108190 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 02:09:07.109135 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 02:09:07.109531 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 02:09:07.110470 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 02:09:07.112490 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 02:09:07.116984 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 02:09:07.118517 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 02:09:07.120471 extend-filesystems[1635]: Found /dev/vda6
Dec 16 02:09:07.124505 chronyd[1627]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 02:09:07.125523 chronyd[1627]: Loaded seccomp filter (level 2)
Dec 16 02:09:07.125831 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 02:09:07.126242 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 02:09:07.126461 extend-filesystems[1635]: Found /dev/vda9
Dec 16 02:09:07.129505 extend-filesystems[1635]: Checking size of /dev/vda9
Dec 16 02:09:07.129513 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 02:09:07.129764 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 02:09:07.132436 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 02:09:07.132991 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 02:09:07.135287 jq[1648]: true
Dec 16 02:09:07.147823 tar[1658]: linux-arm64/LICENSE
Dec 16 02:09:07.148113 tar[1658]: linux-arm64/helm
Dec 16 02:09:07.148459 extend-filesystems[1635]: Resized partition /dev/vda9
Dec 16 02:09:07.155621 extend-filesystems[1680]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 02:09:07.157391 update_engine[1646]: I20251216 02:09:07.156373 1646 main.cc:92] Flatcar Update Engine starting
Dec 16 02:09:07.161899 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks
Dec 16 02:09:07.165159 jq[1669]: true
Dec 16 02:09:07.225937 systemd-logind[1644]: New seat seat0.
Dec 16 02:09:07.228470 systemd-logind[1644]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 16 02:09:07.228496 systemd-logind[1644]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Dec 16 02:09:07.228809 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 02:09:07.235257 dbus-daemon[1630]: [system] SELinux support is enabled
Dec 16 02:09:07.235762 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 02:09:07.239999 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 02:09:07.240036 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 02:09:07.241130 dbus-daemon[1630]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 02:09:07.241540 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 02:09:07.241569 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 02:09:07.243541 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 02:09:07.244643 update_engine[1646]: I20251216 02:09:07.244582 1646 update_check_scheduler.cc:74] Next update check in 3m16s
Dec 16 02:09:07.246842 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 02:09:07.333387 containerd[1662]: time="2025-12-16T02:09:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 02:09:07.479258 containerd[1662]: time="2025-12-16T02:09:07.477515880Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 16 02:09:07.335416 locksmithd[1705]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 02:09:07.385025 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 02:09:07.516641 containerd[1662]: time="2025-12-16T02:09:07.487610520Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.28µs"
Dec 16 02:09:07.516641 containerd[1662]: time="2025-12-16T02:09:07.487644720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 02:09:07.516641 containerd[1662]: time="2025-12-16T02:09:07.487686040Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 02:09:07.516641 containerd[1662]: time="2025-12-16T02:09:07.487697480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 02:09:07.516776 containerd[1662]: time="2025-12-16T02:09:07.516739280Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 02:09:07.516798 containerd[1662]: time="2025-12-16T02:09:07.516774160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 02:09:07.517528 containerd[1662]: time="2025-12-16T02:09:07.516839520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 02:09:07.517528 containerd[1662]: time="2025-12-16T02:09:07.517140480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 02:09:07.517622 sshd_keygen[1652]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 02:09:07.517737 containerd[1662]: time="2025-12-16T02:09:07.517602120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 02:09:07.517771 containerd[1662]: time="2025-12-16T02:09:07.517621640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 02:09:07.517791 containerd[1662]: time="2025-12-16T02:09:07.517769520Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 02:09:07.517791 containerd[1662]: time="2025-12-16T02:09:07.517779840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 02:09:07.518693 containerd[1662]: time="2025-12-16T02:09:07.518229920Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 02:09:07.518693 containerd[1662]: time="2025-12-16T02:09:07.518259760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 02:09:07.518693 containerd[1662]: time="2025-12-16T02:09:07.518403680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 02:09:07.518947 containerd[1662]: time="2025-12-16T02:09:07.518915080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 02:09:07.519054 containerd[1662]: time="2025-12-16T02:09:07.519026800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 02:09:07.519054 containerd[1662]: time="2025-12-16T02:09:07.519046720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 02:09:07.519139 containerd[1662]: time="2025-12-16T02:09:07.519121000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 02:09:07.520106 containerd[1662]: time="2025-12-16T02:09:07.520063880Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 02:09:07.520888 containerd[1662]: time="2025-12-16T02:09:07.520164960Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 02:09:07.539337 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 02:09:07.542138 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 02:09:07.544027 systemd[1]: Started sshd@0-10.0.26.207:22-139.178.68.195:53744.service - OpenSSH per-connection server daemon (139.178.68.195:53744).
Dec 16 02:09:07.561298 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 02:09:07.625670 bash[1704]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 02:09:07.561576 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 02:09:07.564685 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 02:09:07.575169 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 02:09:07.579073 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 02:09:07.582724 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 02:09:07.584165 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 02:09:07.596970 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 02:09:07.599855 systemd[1]: Starting sshkeys.service... Dec 16 02:09:07.621060 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 02:09:07.623559 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 02:09:07.643887 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 02:09:07.662989 containerd[1662]: time="2025-12-16T02:09:07.662936960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 02:09:07.663085 containerd[1662]: time="2025-12-16T02:09:07.663017160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:09:07.663127 containerd[1662]: time="2025-12-16T02:09:07.663105480Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:09:07.663127 containerd[1662]: time="2025-12-16T02:09:07.663122720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 02:09:07.664020 containerd[1662]: time="2025-12-16T02:09:07.663139240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 02:09:07.664070 containerd[1662]: time="2025-12-16T02:09:07.664038640Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 02:09:07.664070 containerd[1662]: time="2025-12-16T02:09:07.664061600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 02:09:07.664118 containerd[1662]: time="2025-12-16T02:09:07.664072840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 02:09:07.664118 containerd[1662]: time="2025-12-16T02:09:07.664087480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 02:09:07.664118 containerd[1662]: time="2025-12-16T02:09:07.664099720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 02:09:07.664118 containerd[1662]: time="2025-12-16T02:09:07.664112680Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 02:09:07.664183 containerd[1662]: time="2025-12-16T02:09:07.664123360Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 02:09:07.664183 containerd[1662]: time="2025-12-16T02:09:07.664134040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 02:09:07.664183 containerd[1662]: time="2025-12-16T02:09:07.664146480Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 02:09:07.664342 containerd[1662]: time="2025-12-16T02:09:07.664298920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 02:09:07.664429 containerd[1662]: time="2025-12-16T02:09:07.664366880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 02:09:07.664429 containerd[1662]: time="2025-12-16T02:09:07.664412800Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 02:09:07.664533 containerd[1662]: time="2025-12-16T02:09:07.664443840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 02:09:07.664533 containerd[1662]: time="2025-12-16T02:09:07.664477520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 02:09:07.664533 containerd[1662]: time="2025-12-16T02:09:07.664505960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 02:09:07.664591 containerd[1662]: time="2025-12-16T02:09:07.664555840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 02:09:07.664591 containerd[1662]: time="2025-12-16T02:09:07.664566960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 02:09:07.664591 containerd[1662]: time="2025-12-16T02:09:07.664577760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 02:09:07.664591 containerd[1662]: time="2025-12-16T02:09:07.664588280Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 02:09:07.664591 containerd[1662]: time="2025-12-16T02:09:07.664597960Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 02:09:07.664591 containerd[1662]: time="2025-12-16T02:09:07.664634120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 02:09:07.664935 containerd[1662]: time="2025-12-16T02:09:07.664676000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 02:09:07.664935 containerd[1662]: time="2025-12-16T02:09:07.664689480Z" level=info msg="Start snapshots syncer" Dec 16 02:09:07.664935 
containerd[1662]: time="2025-12-16T02:09:07.664713760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 02:09:07.665006 containerd[1662]: time="2025-12-16T02:09:07.664957200Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\
"}" Dec 16 02:09:07.665006 containerd[1662]: time="2025-12-16T02:09:07.665002200Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 02:09:07.665208 containerd[1662]: time="2025-12-16T02:09:07.665041560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 02:09:07.665208 containerd[1662]: time="2025-12-16T02:09:07.665130920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 02:09:07.665208 containerd[1662]: time="2025-12-16T02:09:07.665150680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 02:09:07.665208 containerd[1662]: time="2025-12-16T02:09:07.665162080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 02:09:07.665208 containerd[1662]: time="2025-12-16T02:09:07.665171800Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 02:09:07.665208 containerd[1662]: time="2025-12-16T02:09:07.665183000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 02:09:07.665208 containerd[1662]: time="2025-12-16T02:09:07.665192680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 02:09:07.665208 containerd[1662]: time="2025-12-16T02:09:07.665203400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665219480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665231080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 
Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665260480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665274280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665282640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665291600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665298680Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665307680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665316680Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665409760Z" level=info msg="runtime interface created" Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665414680Z" level=info msg="created NRI interface" Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665422320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665433000Z" level=info msg="Connect containerd service" Dec 16 02:09:07.665485 containerd[1662]: time="2025-12-16T02:09:07.665451360Z" level=info 
msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 02:09:07.666194 containerd[1662]: time="2025-12-16T02:09:07.666160840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.744843160Z" level=info msg="Start subscribing containerd event" Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.744938200Z" level=info msg="Start recovering state" Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.745027320Z" level=info msg="Start event monitor" Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.745040280Z" level=info msg="Start cni network conf syncer for default" Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.745046880Z" level=info msg="Start streaming server" Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.745055200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.745063120Z" level=info msg="runtime interface starting up..." Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.745068800Z" level=info msg="starting plugins..." Dec 16 02:09:07.745066 containerd[1662]: time="2025-12-16T02:09:07.745081960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 02:09:07.745532 containerd[1662]: time="2025-12-16T02:09:07.745305000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 02:09:07.745532 containerd[1662]: time="2025-12-16T02:09:07.745369600Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 02:09:07.745609 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 16 02:09:07.746742 containerd[1662]: time="2025-12-16T02:09:07.746709840Z" level=info msg="containerd successfully booted in 0.413672s" Dec 16 02:09:07.853067 tar[1658]: linux-arm64/README.md Dec 16 02:09:07.873283 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 02:09:08.028343 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Dec 16 02:09:08.208220 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 02:09:08.211690 extend-filesystems[1680]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 02:09:08.211690 extend-filesystems[1680]: old_desc_blocks = 1, new_desc_blocks = 6 Dec 16 02:09:08.211690 extend-filesystems[1680]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Dec 16 02:09:08.217875 extend-filesystems[1635]: Resized filesystem in /dev/vda9 Dec 16 02:09:08.213372 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 02:09:08.213676 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 02:09:08.255144 systemd-networkd[1582]: eth0: Gained IPv6LL Dec 16 02:09:08.258508 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 02:09:08.260341 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 02:09:08.263039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:08.265340 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 02:09:08.296564 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 02:09:08.410976 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 53744 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:09:08.413911 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:09:08.421067 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 16 02:09:08.423376 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 02:09:08.429014 systemd-logind[1644]: New session 1 of user core. Dec 16 02:09:08.446897 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 02:09:08.451176 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 02:09:08.477274 (systemd)[1773]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:09:08.481161 systemd-logind[1644]: New session 2 of user core. Dec 16 02:09:08.589420 systemd[1773]: Queued start job for default target default.target. Dec 16 02:09:08.598138 systemd[1773]: Created slice app.slice - User Application Slice. Dec 16 02:09:08.598172 systemd[1773]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 02:09:08.598185 systemd[1773]: Reached target paths.target - Paths. Dec 16 02:09:08.598235 systemd[1773]: Reached target timers.target - Timers. Dec 16 02:09:08.599450 systemd[1773]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 02:09:08.600203 systemd[1773]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 02:09:08.610297 systemd[1773]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 02:09:08.611623 systemd[1773]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 02:09:08.611783 systemd[1773]: Reached target sockets.target - Sockets. Dec 16 02:09:08.611826 systemd[1773]: Reached target basic.target - Basic System. Dec 16 02:09:08.611854 systemd[1773]: Reached target default.target - Main User Target. Dec 16 02:09:08.611905 systemd[1773]: Startup finished in 125ms. Dec 16 02:09:08.612127 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 02:09:08.622345 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 16 02:09:08.653896 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 02:09:09.064513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:09.068777 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:09:09.107239 systemd[1]: Started sshd@1-10.0.26.207:22-139.178.68.195:53758.service - OpenSSH per-connection server daemon (139.178.68.195:53758). Dec 16 02:09:09.519133 kubelet[1792]: E1216 02:09:09.519014 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:09:09.521606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:09:09.521767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:09:09.522266 systemd[1]: kubelet.service: Consumed 697ms CPU time, 247.1M memory peak. Dec 16 02:09:09.915927 sshd[1794]: Accepted publickey for core from 139.178.68.195 port 53758 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:09:09.917208 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:09:09.922475 systemd-logind[1644]: New session 3 of user core. Dec 16 02:09:09.929123 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 02:09:10.116942 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 02:09:10.384843 sshd[1805]: Connection closed by 139.178.68.195 port 53758 Dec 16 02:09:10.384701 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Dec 16 02:09:10.389169 systemd[1]: sshd@1-10.0.26.207:22-139.178.68.195:53758.service: Deactivated successfully. 
Dec 16 02:09:10.390884 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 02:09:10.393908 systemd-logind[1644]: Session 3 logged out. Waiting for processes to exit. Dec 16 02:09:10.394963 systemd-logind[1644]: Removed session 3. Dec 16 02:09:10.558708 systemd[1]: Started sshd@2-10.0.26.207:22-139.178.68.195:34328.service - OpenSSH per-connection server daemon (139.178.68.195:34328). Dec 16 02:09:10.664889 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 02:09:11.379490 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 34328 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:09:11.380830 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:09:11.385506 systemd-logind[1644]: New session 4 of user core. Dec 16 02:09:11.397194 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 02:09:11.854681 sshd[1817]: Connection closed by 139.178.68.195 port 34328 Dec 16 02:09:11.855271 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Dec 16 02:09:11.859746 systemd[1]: sshd@2-10.0.26.207:22-139.178.68.195:34328.service: Deactivated successfully. Dec 16 02:09:11.861562 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 02:09:11.864052 systemd-logind[1644]: Session 4 logged out. Waiting for processes to exit. Dec 16 02:09:11.864805 systemd-logind[1644]: Removed session 4. 
Dec 16 02:09:14.129880 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 02:09:14.136362 coreos-metadata[1629]: Dec 16 02:09:14.136 WARN failed to locate config-drive, using the metadata service API instead Dec 16 02:09:14.423320 coreos-metadata[1629]: Dec 16 02:09:14.423 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 16 02:09:14.672901 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 02:09:14.680176 coreos-metadata[1735]: Dec 16 02:09:14.679 WARN failed to locate config-drive, using the metadata service API instead Dec 16 02:09:14.692791 coreos-metadata[1629]: Dec 16 02:09:14.692 INFO Fetch successful Dec 16 02:09:14.693380 coreos-metadata[1629]: Dec 16 02:09:14.693 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 02:09:14.693705 coreos-metadata[1735]: Dec 16 02:09:14.693 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 16 02:09:14.940313 coreos-metadata[1629]: Dec 16 02:09:14.940 INFO Fetch successful Dec 16 02:09:14.940313 coreos-metadata[1629]: Dec 16 02:09:14.940 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 16 02:09:14.942770 coreos-metadata[1735]: Dec 16 02:09:14.942 INFO Fetch successful Dec 16 02:09:14.942952 coreos-metadata[1735]: Dec 16 02:09:14.942 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 02:09:15.188995 coreos-metadata[1629]: Dec 16 02:09:15.188 INFO Fetch successful Dec 16 02:09:15.189449 coreos-metadata[1629]: Dec 16 02:09:15.189 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 16 02:09:15.191210 coreos-metadata[1735]: Dec 16 02:09:15.191 INFO Fetch successful Dec 16 02:09:15.193352 unknown[1735]: wrote ssh authorized keys file for user: core Dec 16 02:09:15.221902 update-ssh-keys[1831]: Updated "/home/core/.ssh/authorized_keys" Dec 16 02:09:15.222518 systemd[1]: Finished 
coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 02:09:15.224685 systemd[1]: Finished sshkeys.service. Dec 16 02:09:17.630367 coreos-metadata[1629]: Dec 16 02:09:17.630 INFO Fetch successful Dec 16 02:09:17.630694 coreos-metadata[1629]: Dec 16 02:09:17.630 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 16 02:09:17.788558 coreos-metadata[1629]: Dec 16 02:09:17.788 INFO Fetch successful Dec 16 02:09:17.788558 coreos-metadata[1629]: Dec 16 02:09:17.788 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 16 02:09:17.923516 coreos-metadata[1629]: Dec 16 02:09:17.923 INFO Fetch successful Dec 16 02:09:17.973983 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 02:09:17.974647 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 02:09:17.974788 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 02:09:17.977962 systemd[1]: Startup finished in 2.646s (kernel) + 12.428s (initrd) + 13.495s (userspace) = 28.570s. Dec 16 02:09:19.631344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 02:09:19.632846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:20.426694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 02:09:20.430463 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:09:20.692024 kubelet[1847]: E1216 02:09:20.691891 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:09:20.694812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:09:20.694984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:09:20.695541 systemd[1]: kubelet.service: Consumed 149ms CPU time, 108.2M memory peak. Dec 16 02:09:22.028815 systemd[1]: Started sshd@3-10.0.26.207:22-139.178.68.195:58784.service - OpenSSH per-connection server daemon (139.178.68.195:58784). Dec 16 02:09:22.858233 sshd[1857]: Accepted publickey for core from 139.178.68.195 port 58784 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:09:22.859561 sshd-session[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:09:22.863719 systemd-logind[1644]: New session 5 of user core. Dec 16 02:09:22.874353 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 02:09:23.332899 sshd[1861]: Connection closed by 139.178.68.195 port 58784 Dec 16 02:09:23.333493 sshd-session[1857]: pam_unix(sshd:session): session closed for user core Dec 16 02:09:23.337262 systemd[1]: sshd@3-10.0.26.207:22-139.178.68.195:58784.service: Deactivated successfully. Dec 16 02:09:23.340890 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 02:09:23.341892 systemd-logind[1644]: Session 5 logged out. Waiting for processes to exit. Dec 16 02:09:23.342761 systemd-logind[1644]: Removed session 5. 
Dec 16 02:09:23.504213 systemd[1]: Started sshd@4-10.0.26.207:22-139.178.68.195:58786.service - OpenSSH per-connection server daemon (139.178.68.195:58786). Dec 16 02:09:24.340051 sshd[1867]: Accepted publickey for core from 139.178.68.195 port 58786 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:09:24.341368 sshd-session[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:09:24.346259 systemd-logind[1644]: New session 6 of user core. Dec 16 02:09:24.358308 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 02:09:24.810193 sshd[1871]: Connection closed by 139.178.68.195 port 58786 Dec 16 02:09:24.810720 sshd-session[1867]: pam_unix(sshd:session): session closed for user core Dec 16 02:09:24.814694 systemd[1]: sshd@4-10.0.26.207:22-139.178.68.195:58786.service: Deactivated successfully. Dec 16 02:09:24.816300 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 02:09:24.817092 systemd-logind[1644]: Session 6 logged out. Waiting for processes to exit. Dec 16 02:09:24.819255 systemd-logind[1644]: Removed session 6. Dec 16 02:09:24.981254 systemd[1]: Started sshd@5-10.0.26.207:22-139.178.68.195:58798.service - OpenSSH per-connection server daemon (139.178.68.195:58798). Dec 16 02:09:25.824708 sshd[1877]: Accepted publickey for core from 139.178.68.195 port 58798 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:09:25.826093 sshd-session[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:09:25.829789 systemd-logind[1644]: New session 7 of user core. Dec 16 02:09:25.847118 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 02:09:26.299961 sshd[1881]: Connection closed by 139.178.68.195 port 58798 Dec 16 02:09:26.299849 sshd-session[1877]: pam_unix(sshd:session): session closed for user core Dec 16 02:09:26.303974 systemd[1]: sshd@5-10.0.26.207:22-139.178.68.195:58798.service: Deactivated successfully. Dec 16 02:09:26.305586 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 02:09:26.306440 systemd-logind[1644]: Session 7 logged out. Waiting for processes to exit. Dec 16 02:09:26.307378 systemd-logind[1644]: Removed session 7. Dec 16 02:09:26.468366 systemd[1]: Started sshd@6-10.0.26.207:22-139.178.68.195:58806.service - OpenSSH per-connection server daemon (139.178.68.195:58806). Dec 16 02:09:27.284373 sshd[1887]: Accepted publickey for core from 139.178.68.195 port 58806 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:09:27.285769 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:09:27.289726 systemd-logind[1644]: New session 8 of user core. Dec 16 02:09:27.306127 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 02:09:27.611768 sudo[1892]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 02:09:27.612043 sudo[1892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:09:27.625043 sudo[1892]: pam_unix(sudo:session): session closed for user root Dec 16 02:09:27.780931 sshd[1891]: Connection closed by 139.178.68.195 port 58806 Dec 16 02:09:27.781444 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Dec 16 02:09:27.786223 systemd[1]: sshd@6-10.0.26.207:22-139.178.68.195:58806.service: Deactivated successfully. Dec 16 02:09:27.787935 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 02:09:27.790702 systemd-logind[1644]: Session 8 logged out. Waiting for processes to exit. Dec 16 02:09:27.792148 systemd-logind[1644]: Removed session 8. 
Dec 16 02:09:27.952391 systemd[1]: Started sshd@7-10.0.26.207:22-139.178.68.195:58816.service - OpenSSH per-connection server daemon (139.178.68.195:58816).
Dec 16 02:09:28.792913 sshd[1899]: Accepted publickey for core from 139.178.68.195 port 58816 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE
Dec 16 02:09:28.794281 sshd-session[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 02:09:28.798031 systemd-logind[1644]: New session 9 of user core.
Dec 16 02:09:28.809088 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 02:09:29.111656 sudo[1905]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 02:09:29.111949 sudo[1905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 02:09:29.114362 sudo[1905]: pam_unix(sudo:session): session closed for user root
Dec 16 02:09:29.120034 sudo[1904]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 02:09:29.120284 sudo[1904]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 02:09:29.127116 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 02:09:29.164000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 02:09:29.166489 kernel: kauditd_printk_skb: 184 callbacks suppressed
Dec 16 02:09:29.166547 kernel: audit: type=1305 audit(1765850969.164:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 02:09:29.166727 augenrules[1929]: No rules
Dec 16 02:09:29.164000 audit[1929]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd250a550 a2=420 a3=0 items=0 ppid=1910 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:29.169643 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 02:09:29.169958 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 02:09:29.171833 sudo[1904]: pam_unix(sudo:session): session closed for user root
Dec 16 02:09:29.175256 kernel: audit: type=1300 audit(1765850969.164:228): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd250a550 a2=420 a3=0 items=0 ppid=1910 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:29.164000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 02:09:29.176931 kernel: audit: type=1327 audit(1765850969.164:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 02:09:29.176976 kernel: audit: type=1130 audit(1765850969.168:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.179167 kernel: audit: type=1131 audit(1765850969.168:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.168000 audit[1904]: USER_END pid=1904 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.183926 kernel: audit: type=1106 audit(1765850969.168:231): pid=1904 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.184028 kernel: audit: type=1104 audit(1765850969.168:232): pid=1904 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.168000 audit[1904]: CRED_DISP pid=1904 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.330536 sshd[1903]: Connection closed by 139.178.68.195 port 58816
Dec 16 02:09:29.330998 sshd-session[1899]: pam_unix(sshd:session): session closed for user core
Dec 16 02:09:29.331000 audit[1899]: USER_END pid=1899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 02:09:29.334480 systemd[1]: sshd@7-10.0.26.207:22-139.178.68.195:58816.service: Deactivated successfully.
Dec 16 02:09:29.336185 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 02:09:29.331000 audit[1899]: CRED_DISP pid=1899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 02:09:29.338885 kernel: audit: type=1106 audit(1765850969.331:233): pid=1899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 02:09:29.338935 kernel: audit: type=1104 audit(1765850969.331:234): pid=1899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 02:09:29.338954 kernel: audit: type=1131 audit(1765850969.334:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.26.207:22-139.178.68.195:58816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.26.207:22-139.178.68.195:58816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.340242 systemd-logind[1644]: Session 9 logged out. Waiting for processes to exit.
Dec 16 02:09:29.341140 systemd-logind[1644]: Removed session 9.
Dec 16 02:09:29.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.26.207:22-139.178.68.195:58822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:29.502794 systemd[1]: Started sshd@8-10.0.26.207:22-139.178.68.195:58822.service - OpenSSH per-connection server daemon (139.178.68.195:58822).
Dec 16 02:09:30.326000 audit[1938]: USER_ACCT pid=1938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 02:09:30.327251 sshd[1938]: Accepted publickey for core from 139.178.68.195 port 58822 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE
Dec 16 02:09:30.327000 audit[1938]: CRED_ACQ pid=1938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 02:09:30.327000 audit[1938]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf055e60 a2=3 a3=0 items=0 ppid=1 pid=1938 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:30.327000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 02:09:30.328493 sshd-session[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 02:09:30.332924 systemd-logind[1644]: New session 10 of user core.
Dec 16 02:09:30.343161 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 02:09:30.344000 audit[1938]: USER_START pid=1938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 02:09:30.345000 audit[1942]: CRED_ACQ pid=1942 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 02:09:30.645000 audit[1943]: USER_ACCT pid=1943 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:30.646264 sudo[1943]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 02:09:30.645000 audit[1943]: CRED_REFR pid=1943 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:30.645000 audit[1943]: USER_START pid=1943 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:30.646527 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 02:09:30.881285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 02:09:30.882705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 02:09:30.909934 chronyd[1627]: Selected source PHC0
Dec 16 02:09:32.129373 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 02:09:32.146102 (dockerd)[1968]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 02:09:32.233282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:09:32.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:09:32.236596 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 02:09:32.264930 kubelet[1974]: E1216 02:09:32.264874 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 02:09:32.267042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 02:09:32.267160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 02:09:32.268936 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.2M memory peak.
Dec 16 02:09:32.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 02:09:32.722782 dockerd[1968]: time="2025-12-16T02:09:32.722696175Z" level=info msg="Starting up"
Dec 16 02:09:32.723681 dockerd[1968]: time="2025-12-16T02:09:32.723649967Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 02:09:32.734206 dockerd[1968]: time="2025-12-16T02:09:32.734166801Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 02:09:32.768244 dockerd[1968]: time="2025-12-16T02:09:32.768176368Z" level=info msg="Loading containers: start."
Dec 16 02:09:32.777880 kernel: Initializing XFRM netlink socket
Dec 16 02:09:32.821000 audit[2034]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.821000 audit[2034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffeab92530 a2=0 a3=0 items=0 ppid=1968 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 02:09:32.823000 audit[2036]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.823000 audit[2036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd11c9d70 a2=0 a3=0 items=0 ppid=1968 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.823000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 02:09:32.825000 audit[2038]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.825000 audit[2038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5ddc9d0 a2=0 a3=0 items=0 ppid=1968 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 02:09:32.827000 audit[2040]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.827000 audit[2040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf87f970 a2=0 a3=0 items=0 ppid=1968 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 02:09:32.829000 audit[2042]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.829000 audit[2042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe290c240 a2=0 a3=0 items=0 ppid=1968 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 02:09:32.831000 audit[2044]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.831000 audit[2044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc909bc90 a2=0 a3=0 items=0 ppid=1968 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 02:09:32.832000 audit[2046]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.832000 audit[2046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd8860e90 a2=0 a3=0 items=0 ppid=1968 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.832000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 02:09:32.834000 audit[2048]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.834000 audit[2048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd5bfc880 a2=0 a3=0 items=0 ppid=1968 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 02:09:32.873000 audit[2051]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.873000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffff1f80870 a2=0 a3=0 items=0 ppid=1968 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Dec 16 02:09:32.875000 audit[2053]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.875000 audit[2053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc200d510 a2=0 a3=0 items=0 ppid=1968 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 16 02:09:32.877000 audit[2055]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.877000 audit[2055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc99b6560 a2=0 a3=0 items=0 ppid=1968 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.877000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 16 02:09:32.879000 audit[2057]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.879000 audit[2057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc750ba80 a2=0 a3=0 items=0 ppid=1968 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 02:09:32.881000 audit[2059]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.881000 audit[2059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc1671710 a2=0 a3=0 items=0 ppid=1968 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 16 02:09:32.916000 audit[2089]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.916000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffec220eb0 a2=0 a3=0 items=0 ppid=1968 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.916000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 02:09:32.918000 audit[2091]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.918000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffdfe17ea0 a2=0 a3=0 items=0 ppid=1968 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 02:09:32.920000 audit[2093]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.920000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1026090 a2=0 a3=0 items=0 ppid=1968 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 02:09:32.921000 audit[2095]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.921000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5f80820 a2=0 a3=0 items=0 ppid=1968 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 02:09:32.923000 audit[2097]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.923000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee9d46d0 a2=0 a3=0 items=0 ppid=1968 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 02:09:32.925000 audit[2099]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.925000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdbede320 a2=0 a3=0 items=0 ppid=1968 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 02:09:32.926000 audit[2101]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.926000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffb5f6160 a2=0 a3=0 items=0 ppid=1968 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 02:09:32.928000 audit[2103]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.928000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc4c32c80 a2=0 a3=0 items=0 ppid=1968 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.928000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 02:09:32.930000 audit[2105]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.930000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffe5ca0a80 a2=0 a3=0 items=0 ppid=1968 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238
Dec 16 02:09:32.932000 audit[2107]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.932000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffce7c8290 a2=0 a3=0 items=0 ppid=1968 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 16 02:09:32.934000 audit[2109]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.934000 audit[2109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd5d77920 a2=0 a3=0 items=0 ppid=1968 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 16 02:09:32.936000 audit[2111]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.936000 audit[2111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd6c1bea0 a2=0 a3=0 items=0 ppid=1968 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 02:09:32.937000 audit[2113]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.937000 audit[2113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd9263d20 a2=0 a3=0 items=0 ppid=1968 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 16 02:09:32.942000 audit[2118]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2118 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.942000 audit[2118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff41abc20 a2=0 a3=0 items=0 ppid=1968 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 16 02:09:32.944000 audit[2120]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.944000 audit[2120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff64cdae0 a2=0 a3=0 items=0 ppid=1968 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.944000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 16 02:09:32.946000 audit[2122]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.946000 audit[2122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffcfe64b0 a2=0 a3=0 items=0 ppid=1968 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 16 02:09:32.948000 audit[2124]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.948000 audit[2124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8f9a990 a2=0 a3=0 items=0 ppid=1968 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.948000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 16 02:09:32.950000 audit[2126]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.950000 audit[2126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffed3d41a0 a2=0 a3=0 items=0 ppid=1968 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 16 02:09:32.951000 audit[2128]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 02:09:32.951000 audit[2128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcb976080 a2=0 a3=0 items=0 ppid=1968 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 16 02:09:32.976000 audit[2133]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.976000 audit[2133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffe4b5c90 a2=0 a3=0 items=0 ppid=1968 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Dec 16 02:09:32.978000 audit[2135]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.978000 audit[2135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffde5bfcf0 a2=0 a3=0 items=0 ppid=1968 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Dec 16 02:09:32.986000 audit[2143]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.986000 audit[2143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffdf58aec0 a2=0 a3=0 items=0 ppid=1968 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054
Dec 16 02:09:32.995000 audit[2149]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.995000 audit[2149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcd27cbb0 a2=0 a3=0 items=0 ppid=1968 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50
Dec 16 02:09:32.997000 audit[2151]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.997000 audit[2151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc8946ad0 a2=0 a3=0 items=0 ppid=1968 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Dec 16 02:09:32.999000 audit[2153]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2153 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:32.999000 audit[2153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe2603280 a2=0 a3=0 items=0 ppid=1968 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:32.999000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552
Dec 16 02:09:33.001000 audit[2155]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 02:09:33.001000 audit[2155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe7a222a0 a2=0 a3=0 items=0 ppid=1968 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:09:33.001000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 02:09:33.003000 audit[2157]: NETFILTER_CFG table=filter:41 family=2 entries=1
op=nft_register_rule pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:33.003000 audit[2157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc8f63ea0 a2=0 a3=0 items=0 ppid=1968 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:33.003000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 02:09:33.005385 systemd-networkd[1582]: docker0: Link UP Dec 16 02:09:33.009649 dockerd[1968]: time="2025-12-16T02:09:33.009603435Z" level=info msg="Loading containers: done." Dec 16 02:09:33.022034 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck824826568-merged.mount: Deactivated successfully. Dec 16 02:09:33.030892 dockerd[1968]: time="2025-12-16T02:09:33.030552257Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 02:09:33.030892 dockerd[1968]: time="2025-12-16T02:09:33.030657825Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 02:09:33.030892 dockerd[1968]: time="2025-12-16T02:09:33.030823797Z" level=info msg="Initializing buildkit" Dec 16 02:09:33.051227 dockerd[1968]: time="2025-12-16T02:09:33.051193175Z" level=info msg="Completed buildkit initialization" Dec 16 02:09:33.057470 dockerd[1968]: time="2025-12-16T02:09:33.057435486Z" level=info msg="Daemon has completed initialization" Dec 16 02:09:33.057662 dockerd[1968]: time="2025-12-16T02:09:33.057614740Z" level=info msg="API listen on /run/docker.sock" Dec 16 02:09:33.057713 systemd[1]: Started docker.service - Docker 
Application Container Engine. Dec 16 02:09:33.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:34.099662 containerd[1662]: time="2025-12-16T02:09:34.099476880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 02:09:34.916028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085143249.mount: Deactivated successfully. Dec 16 02:09:35.724856 containerd[1662]: time="2025-12-16T02:09:35.724810355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:35.725658 containerd[1662]: time="2025-12-16T02:09:35.725615918Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22974850" Dec 16 02:09:35.727086 containerd[1662]: time="2025-12-16T02:09:35.727033562Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:35.729847 containerd[1662]: time="2025-12-16T02:09:35.729796930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:35.731590 containerd[1662]: time="2025-12-16T02:09:35.731545976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.631991174s" Dec 16 02:09:35.731760 containerd[1662]: 
time="2025-12-16T02:09:35.731675256Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 16 02:09:35.732374 containerd[1662]: time="2025-12-16T02:09:35.732339018Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 02:09:37.042847 containerd[1662]: time="2025-12-16T02:09:37.042789112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:37.043741 containerd[1662]: time="2025-12-16T02:09:37.043694114Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Dec 16 02:09:37.044813 containerd[1662]: time="2025-12-16T02:09:37.044775238Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:37.048894 containerd[1662]: time="2025-12-16T02:09:37.048135888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:37.049022 containerd[1662]: time="2025-12-16T02:09:37.049000890Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.316617432s" Dec 16 02:09:37.049098 containerd[1662]: time="2025-12-16T02:09:37.049083610Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image 
reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 16 02:09:37.049614 containerd[1662]: time="2025-12-16T02:09:37.049545172Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 02:09:37.861276 containerd[1662]: time="2025-12-16T02:09:37.861224168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:37.862136 containerd[1662]: time="2025-12-16T02:09:37.862092491Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Dec 16 02:09:37.862854 containerd[1662]: time="2025-12-16T02:09:37.862824973Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:37.866148 containerd[1662]: time="2025-12-16T02:09:37.866102063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:37.867832 containerd[1662]: time="2025-12-16T02:09:37.867425627Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 817.809615ms" Dec 16 02:09:37.867832 containerd[1662]: time="2025-12-16T02:09:37.867459387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 16 02:09:37.868219 containerd[1662]: time="2025-12-16T02:09:37.868193869Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 02:09:38.704101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976224467.mount: Deactivated successfully. Dec 16 02:09:38.887801 containerd[1662]: time="2025-12-16T02:09:38.887739850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:38.889298 containerd[1662]: time="2025-12-16T02:09:38.889248374Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Dec 16 02:09:38.890309 containerd[1662]: time="2025-12-16T02:09:38.890278177Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:38.892443 containerd[1662]: time="2025-12-16T02:09:38.892388584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:38.892827 containerd[1662]: time="2025-12-16T02:09:38.892782505Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.024478355s" Dec 16 02:09:38.892827 containerd[1662]: time="2025-12-16T02:09:38.892820745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 16 02:09:38.893975 containerd[1662]: time="2025-12-16T02:09:38.893951188Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 02:09:39.530962 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4027199462.mount: Deactivated successfully. Dec 16 02:09:40.051476 containerd[1662]: time="2025-12-16T02:09:40.051398060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:40.052911 containerd[1662]: time="2025-12-16T02:09:40.052829984Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19575910" Dec 16 02:09:40.053994 containerd[1662]: time="2025-12-16T02:09:40.053951987Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:40.057107 containerd[1662]: time="2025-12-16T02:09:40.057060477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:40.058800 containerd[1662]: time="2025-12-16T02:09:40.058748762Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.164691493s" Dec 16 02:09:40.058837 containerd[1662]: time="2025-12-16T02:09:40.058799282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 16 02:09:40.059396 containerd[1662]: time="2025-12-16T02:09:40.059255443Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 02:09:40.571132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201686890.mount: 
Deactivated successfully. Dec 16 02:09:40.577297 containerd[1662]: time="2025-12-16T02:09:40.577256078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:40.578299 containerd[1662]: time="2025-12-16T02:09:40.578246081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 16 02:09:40.579492 containerd[1662]: time="2025-12-16T02:09:40.579437725Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:40.582890 containerd[1662]: time="2025-12-16T02:09:40.582182973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:40.582890 containerd[1662]: time="2025-12-16T02:09:40.582838935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 523.508611ms" Dec 16 02:09:40.583040 containerd[1662]: time="2025-12-16T02:09:40.583011496Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 16 02:09:40.583667 containerd[1662]: time="2025-12-16T02:09:40.583640297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 02:09:41.137229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3996588988.mount: Deactivated successfully. Dec 16 02:09:42.380855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Dec 16 02:09:42.382292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:43.297737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:43.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:43.298993 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 02:09:43.299048 kernel: audit: type=1130 audit(1765850983.297:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:43.301895 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:09:43.510977 kubelet[2383]: E1216 02:09:43.510924 2383 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:09:43.513343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:09:43.513476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:09:43.513834 systemd[1]: kubelet.service: Consumed 145ms CPU time, 109.3M memory peak. Dec 16 02:09:43.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 02:09:43.516886 kernel: audit: type=1131 audit(1765850983.512:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:09:44.251014 containerd[1662]: time="2025-12-16T02:09:44.250932706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.251929 containerd[1662]: time="2025-12-16T02:09:44.251884149Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=85821047" Dec 16 02:09:44.254494 containerd[1662]: time="2025-12-16T02:09:44.254441236Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.257434 containerd[1662]: time="2025-12-16T02:09:44.257379565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.258627 containerd[1662]: time="2025-12-16T02:09:44.258459008Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.67478399s" Dec 16 02:09:44.258627 containerd[1662]: time="2025-12-16T02:09:44.258493649Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 16 02:09:48.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:48.842202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:48.842357 systemd[1]: kubelet.service: Consumed 145ms CPU time, 109.3M memory peak. Dec 16 02:09:48.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:48.845273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:48.847161 kernel: audit: type=1130 audit(1765850988.841:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:48.847230 kernel: audit: type=1131 audit(1765850988.841:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:48.869149 systemd[1]: Reload requested from client PID 2429 ('systemctl') (unit session-10.scope)... Dec 16 02:09:48.869173 systemd[1]: Reloading... Dec 16 02:09:48.931996 zram_generator::config[2475]: No configuration found. Dec 16 02:09:49.123040 systemd[1]: Reloading finished in 253 ms. 
Dec 16 02:09:49.145000 audit: BPF prog-id=63 op=LOAD Dec 16 02:09:49.146000 audit: BPF prog-id=58 op=UNLOAD Dec 16 02:09:49.148434 kernel: audit: type=1334 audit(1765850989.145:292): prog-id=63 op=LOAD Dec 16 02:09:49.148471 kernel: audit: type=1334 audit(1765850989.146:293): prog-id=58 op=UNLOAD Dec 16 02:09:49.148546 kernel: audit: type=1334 audit(1765850989.147:294): prog-id=64 op=LOAD Dec 16 02:09:49.147000 audit: BPF prog-id=64 op=LOAD Dec 16 02:09:49.150914 kernel: audit: type=1334 audit(1765850989.148:295): prog-id=65 op=LOAD Dec 16 02:09:49.151006 kernel: audit: type=1334 audit(1765850989.148:296): prog-id=56 op=UNLOAD Dec 16 02:09:49.151028 kernel: audit: type=1334 audit(1765850989.148:297): prog-id=57 op=UNLOAD Dec 16 02:09:49.148000 audit: BPF prog-id=65 op=LOAD Dec 16 02:09:49.148000 audit: BPF prog-id=56 op=UNLOAD Dec 16 02:09:49.148000 audit: BPF prog-id=57 op=UNLOAD Dec 16 02:09:49.149000 audit: BPF prog-id=66 op=LOAD Dec 16 02:09:49.152524 kernel: audit: type=1334 audit(1765850989.149:298): prog-id=66 op=LOAD Dec 16 02:09:49.152568 kernel: audit: type=1334 audit(1765850989.149:299): prog-id=43 op=UNLOAD Dec 16 02:09:49.149000 audit: BPF prog-id=43 op=UNLOAD Dec 16 02:09:49.150000 audit: BPF prog-id=67 op=LOAD Dec 16 02:09:49.150000 audit: BPF prog-id=68 op=LOAD Dec 16 02:09:49.150000 audit: BPF prog-id=44 op=UNLOAD Dec 16 02:09:49.150000 audit: BPF prog-id=45 op=UNLOAD Dec 16 02:09:49.152000 audit: BPF prog-id=69 op=LOAD Dec 16 02:09:49.158000 audit: BPF prog-id=53 op=UNLOAD Dec 16 02:09:49.158000 audit: BPF prog-id=70 op=LOAD Dec 16 02:09:49.158000 audit: BPF prog-id=71 op=LOAD Dec 16 02:09:49.158000 audit: BPF prog-id=54 op=UNLOAD Dec 16 02:09:49.158000 audit: BPF prog-id=55 op=UNLOAD Dec 16 02:09:49.159000 audit: BPF prog-id=72 op=LOAD Dec 16 02:09:49.159000 audit: BPF prog-id=59 op=UNLOAD Dec 16 02:09:49.160000 audit: BPF prog-id=73 op=LOAD Dec 16 02:09:49.160000 audit: BPF prog-id=52 op=UNLOAD Dec 16 02:09:49.161000 audit: BPF prog-id=74 
op=LOAD Dec 16 02:09:49.161000 audit: BPF prog-id=60 op=UNLOAD Dec 16 02:09:49.161000 audit: BPF prog-id=75 op=LOAD Dec 16 02:09:49.161000 audit: BPF prog-id=76 op=LOAD Dec 16 02:09:49.161000 audit: BPF prog-id=61 op=UNLOAD Dec 16 02:09:49.161000 audit: BPF prog-id=62 op=UNLOAD Dec 16 02:09:49.162000 audit: BPF prog-id=77 op=LOAD Dec 16 02:09:49.162000 audit: BPF prog-id=49 op=UNLOAD Dec 16 02:09:49.162000 audit: BPF prog-id=78 op=LOAD Dec 16 02:09:49.162000 audit: BPF prog-id=79 op=LOAD Dec 16 02:09:49.162000 audit: BPF prog-id=50 op=UNLOAD Dec 16 02:09:49.162000 audit: BPF prog-id=51 op=UNLOAD Dec 16 02:09:49.163000 audit: BPF prog-id=80 op=LOAD Dec 16 02:09:49.163000 audit: BPF prog-id=46 op=UNLOAD Dec 16 02:09:49.163000 audit: BPF prog-id=81 op=LOAD Dec 16 02:09:49.163000 audit: BPF prog-id=82 op=LOAD Dec 16 02:09:49.163000 audit: BPF prog-id=47 op=UNLOAD Dec 16 02:09:49.163000 audit: BPF prog-id=48 op=UNLOAD Dec 16 02:09:49.192384 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 02:09:49.192472 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 02:09:49.192783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:49.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:09:49.192845 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.4M memory peak. Dec 16 02:09:49.194386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:50.228845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:50.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:50.232901 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:09:50.270156 kubelet[2523]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:09:50.270156 kubelet[2523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:09:50.270986 kubelet[2523]: I1216 02:09:50.270937 2523 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:09:50.854248 kubelet[2523]: I1216 02:09:50.854156 2523 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 02:09:50.854248 kubelet[2523]: I1216 02:09:50.854183 2523 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:09:50.856011 kubelet[2523]: I1216 02:09:50.855991 2523 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 02:09:50.856089 kubelet[2523]: I1216 02:09:50.856077 2523 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 02:09:50.856371 kubelet[2523]: I1216 02:09:50.856356 2523 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 02:09:50.863668 kubelet[2523]: E1216 02:09:50.863622 2523 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.26.207:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.26.207:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 02:09:50.864478 kubelet[2523]: I1216 02:09:50.864456 2523 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:09:50.867919 kubelet[2523]: I1216 02:09:50.867892 2523 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:09:50.870600 kubelet[2523]: I1216 02:09:50.870545 2523 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 02:09:50.870775 kubelet[2523]: I1216 02:09:50.870736 2523 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:09:50.870914 kubelet[2523]: I1216 02:09:50.870762 2523 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-9-b4376e68e3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:09:50.870997 kubelet[2523]: I1216 02:09:50.870915 2523 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
02:09:50.870997 kubelet[2523]: I1216 02:09:50.870925 2523 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 02:09:50.871048 kubelet[2523]: I1216 02:09:50.871029 2523 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 02:09:50.873948 kubelet[2523]: I1216 02:09:50.873919 2523 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:09:50.875465 kubelet[2523]: I1216 02:09:50.875425 2523 kubelet.go:475] "Attempting to sync node with API server" Dec 16 02:09:50.875465 kubelet[2523]: I1216 02:09:50.875448 2523 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:09:50.876245 kubelet[2523]: I1216 02:09:50.876190 2523 kubelet.go:387] "Adding apiserver pod source" Dec 16 02:09:50.876245 kubelet[2523]: I1216 02:09:50.876230 2523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:09:50.876356 kubelet[2523]: E1216 02:09:50.876306 2523 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.26.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-9-b4376e68e3&limit=500&resourceVersion=0\": dial tcp 10.0.26.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 02:09:50.878402 kubelet[2523]: E1216 02:09:50.878306 2523 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.26.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.26.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 02:09:50.878946 kubelet[2523]: I1216 02:09:50.878506 2523 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:09:50.879313 kubelet[2523]: I1216 02:09:50.879297 2523 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 02:09:50.879391 kubelet[2523]: I1216 02:09:50.879381 2523 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 02:09:50.879476 kubelet[2523]: W1216 02:09:50.879466 2523 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 02:09:50.883263 kubelet[2523]: I1216 02:09:50.883234 2523 server.go:1262] "Started kubelet" Dec 16 02:09:50.883885 kubelet[2523]: I1216 02:09:50.883519 2523 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:09:50.885629 kubelet[2523]: I1216 02:09:50.885031 2523 server.go:310] "Adding debug handlers to kubelet server" Dec 16 02:09:50.886275 kubelet[2523]: I1216 02:09:50.886147 2523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:09:50.887578 kubelet[2523]: I1216 02:09:50.887542 2523 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:09:50.887765 kubelet[2523]: I1216 02:09:50.887704 2523 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:09:50.887811 kubelet[2523]: I1216 02:09:50.887794 2523 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 02:09:50.888404 kubelet[2523]: I1216 02:09:50.888378 2523 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:09:50.888496 kubelet[2523]: E1216 02:09:50.888480 2523 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:50.888523 kubelet[2523]: I1216 02:09:50.888509 2523 volume_manager.go:313] 
"Starting Kubelet Volume Manager" Dec 16 02:09:50.889956 kubelet[2523]: I1216 02:09:50.888732 2523 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 02:09:50.889956 kubelet[2523]: I1216 02:09:50.888823 2523 reconciler.go:29] "Reconciler: start to sync state" Dec 16 02:09:50.889956 kubelet[2523]: E1216 02:09:50.887939 2523 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.26.207:6443/api/v1/namespaces/default/events\": dial tcp 10.0.26.207:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-9-b4376e68e3.1881901ecddd3f54 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-9-b4376e68e3,UID:ci-4547-0-0-9-b4376e68e3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-9-b4376e68e3,},FirstTimestamp:2025-12-16 02:09:50.88319266 +0000 UTC m=+0.647570865,LastTimestamp:2025-12-16 02:09:50.88319266 +0000 UTC m=+0.647570865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-9-b4376e68e3,}" Dec 16 02:09:50.889956 kubelet[2523]: E1216 02:09:50.889574 2523 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.26.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.26.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 02:09:50.889956 kubelet[2523]: I1216 02:09:50.889753 2523 factory.go:223] Registration of the systemd container factory successfully Dec 16 02:09:50.889956 kubelet[2523]: I1216 02:09:50.889836 2523 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Dec 16 02:09:50.890372 kubelet[2523]: E1216 02:09:50.889830 2523 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.26.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-9-b4376e68e3?timeout=10s\": dial tcp 10.0.26.207:6443: connect: connection refused" interval="200ms" Dec 16 02:09:50.891290 kubelet[2523]: E1216 02:09:50.891262 2523 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:09:50.890000 audit[2540]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:50.890000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdd564f80 a2=0 a3=0 items=0 ppid=2523 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:09:50.891603 kubelet[2523]: I1216 02:09:50.891484 2523 factory.go:223] Registration of the containerd container factory successfully Dec 16 02:09:50.891000 audit[2541]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:50.891000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecc099e0 a2=0 a3=0 items=0 ppid=2523 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.891000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:09:50.893000 audit[2543]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:50.893000 audit[2543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffc7af6e0 a2=0 a3=0 items=0 ppid=2523 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:09:50.895000 audit[2545]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:50.895000 audit[2545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe0b8c620 a2=0 a3=0 items=0 ppid=2523 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:09:50.900048 kubelet[2523]: I1216 02:09:50.900026 2523 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:09:50.900171 kubelet[2523]: I1216 02:09:50.900154 2523 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:09:50.900235 kubelet[2523]: I1216 02:09:50.900226 2523 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:09:50.903170 kubelet[2523]: I1216 02:09:50.902759 2523 policy_none.go:49] "None policy: Start" Dec 16 02:09:50.903170 kubelet[2523]: I1216 02:09:50.902919 2523 memory_manager.go:187] "Starting memorymanager" 
policy="None" Dec 16 02:09:50.903170 kubelet[2523]: I1216 02:09:50.902937 2523 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 02:09:50.902000 audit[2548]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:50.902000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe482b630 a2=0 a3=0 items=0 ppid=2523 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 02:09:50.904291 kubelet[2523]: I1216 02:09:50.904254 2523 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 02:09:50.905113 kubelet[2523]: I1216 02:09:50.904478 2523 policy_none.go:47] "Start" Dec 16 02:09:50.904000 audit[2553]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:50.904000 audit[2553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd2bcf3a0 a2=0 a3=0 items=0 ppid=2523 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:09:50.905383 kubelet[2523]: I1216 02:09:50.905361 2523 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 02:09:50.905417 kubelet[2523]: I1216 02:09:50.905387 2523 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 02:09:50.905436 kubelet[2523]: I1216 02:09:50.905421 2523 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 02:09:50.905473 kubelet[2523]: E1216 02:09:50.905457 2523 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:09:50.905000 audit[2554]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:50.905000 audit[2554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd060e0e0 a2=0 a3=0 items=0 ppid=2523 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.905000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:09:50.906470 kubelet[2523]: E1216 02:09:50.906439 2523 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.26.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.26.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 02:09:50.908000 audit[2555]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:50.908000 audit[2555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc9472f0 a2=0 a3=0 items=0 ppid=2523 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:09:50.909000 audit[2556]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:50.909000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeef94990 a2=0 a3=0 items=0 ppid=2523 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:09:50.909000 audit[2557]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:50.909000 audit[2557]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcdadab70 a2=0 a3=0 items=0 ppid=2523 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:09:50.911427 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 02:09:50.910000 audit[2558]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:50.910000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce767bf0 a2=0 a3=0 items=0 ppid=2523 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:09:50.911000 audit[2559]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:50.911000 audit[2559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8ae1060 a2=0 a3=0 items=0 ppid=2523 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:50.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:09:50.936172 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 16 02:09:50.939646 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 02:09:50.959507 kubelet[2523]: E1216 02:09:50.959466 2523 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 02:09:50.959682 kubelet[2523]: I1216 02:09:50.959642 2523 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:09:50.959682 kubelet[2523]: I1216 02:09:50.959652 2523 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:09:50.960085 kubelet[2523]: I1216 02:09:50.959990 2523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:09:50.961050 kubelet[2523]: E1216 02:09:50.961032 2523 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 02:09:50.961173 kubelet[2523]: E1216 02:09:50.961159 2523 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:51.016922 systemd[1]: Created slice kubepods-burstable-podc459e627bb903b9316ec4cb6fa514cf6.slice - libcontainer container kubepods-burstable-podc459e627bb903b9316ec4cb6fa514cf6.slice. Dec 16 02:09:51.038805 kubelet[2523]: E1216 02:09:51.038623 2523 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.042164 systemd[1]: Created slice kubepods-burstable-pod220048cf79f702d12af4e5d47588f108.slice - libcontainer container kubepods-burstable-pod220048cf79f702d12af4e5d47588f108.slice. 
Dec 16 02:09:51.044159 kubelet[2523]: E1216 02:09:51.044118 2523 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.060193 systemd[1]: Created slice kubepods-burstable-pod52130e8df637ebe19ae48b61ae8db0a3.slice - libcontainer container kubepods-burstable-pod52130e8df637ebe19ae48b61ae8db0a3.slice. Dec 16 02:09:51.061951 kubelet[2523]: I1216 02:09:51.061495 2523 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.062173 kubelet[2523]: E1216 02:09:51.062147 2523 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.26.207:6443/api/v1/nodes\": dial tcp 10.0.26.207:6443: connect: connection refused" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.062331 kubelet[2523]: E1216 02:09:51.062304 2523 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.089589 kubelet[2523]: I1216 02:09:51.089554 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.089589 kubelet[2523]: I1216 02:09:51.089589 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " 
pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.089785 kubelet[2523]: I1216 02:09:51.089607 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/220048cf79f702d12af4e5d47588f108-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-9-b4376e68e3\" (UID: \"220048cf79f702d12af4e5d47588f108\") " pod="kube-system/kube-scheduler-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.089785 kubelet[2523]: I1216 02:09:51.089622 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52130e8df637ebe19ae48b61ae8db0a3-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" (UID: \"52130e8df637ebe19ae48b61ae8db0a3\") " pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.089785 kubelet[2523]: I1216 02:09:51.089660 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52130e8df637ebe19ae48b61ae8db0a3-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" (UID: \"52130e8df637ebe19ae48b61ae8db0a3\") " pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.089785 kubelet[2523]: I1216 02:09:51.089674 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52130e8df637ebe19ae48b61ae8db0a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" (UID: \"52130e8df637ebe19ae48b61ae8db0a3\") " pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.089785 kubelet[2523]: I1216 02:09:51.089691 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-ca-certs\") 
pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.090027 kubelet[2523]: I1216 02:09:51.089705 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.090027 kubelet[2523]: I1216 02:09:51.089718 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.090910 kubelet[2523]: E1216 02:09:51.090856 2523 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.26.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-9-b4376e68e3?timeout=10s\": dial tcp 10.0.26.207:6443: connect: connection refused" interval="400ms" Dec 16 02:09:51.264190 kubelet[2523]: I1216 02:09:51.264132 2523 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.264470 kubelet[2523]: E1216 02:09:51.264425 2523 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.26.207:6443/api/v1/nodes\": dial tcp 10.0.26.207:6443: connect: connection refused" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.343106 containerd[1662]: time="2025-12-16T02:09:51.342922000Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-9-b4376e68e3,Uid:c459e627bb903b9316ec4cb6fa514cf6,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:51.347707 containerd[1662]: time="2025-12-16T02:09:51.347576054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-9-b4376e68e3,Uid:220048cf79f702d12af4e5d47588f108,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:51.364794 containerd[1662]: time="2025-12-16T02:09:51.364741945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-9-b4376e68e3,Uid:52130e8df637ebe19ae48b61ae8db0a3,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:51.492341 kubelet[2523]: E1216 02:09:51.492291 2523 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.26.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-9-b4376e68e3?timeout=10s\": dial tcp 10.0.26.207:6443: connect: connection refused" interval="800ms" Dec 16 02:09:51.667322 kubelet[2523]: I1216 02:09:51.667039 2523 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.667446 kubelet[2523]: E1216 02:09:51.667385 2523 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.26.207:6443/api/v1/nodes\": dial tcp 10.0.26.207:6443: connect: connection refused" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:51.724681 kubelet[2523]: E1216 02:09:51.724560 2523 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.26.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.26.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 02:09:51.849841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021287906.mount: Deactivated successfully. 
Dec 16 02:09:51.856267 containerd[1662]: time="2025-12-16T02:09:51.856176940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:09:51.859090 containerd[1662]: time="2025-12-16T02:09:51.859041949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=886" Dec 16 02:09:51.861166 containerd[1662]: time="2025-12-16T02:09:51.861097675Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:09:51.862285 containerd[1662]: time="2025-12-16T02:09:51.862237438Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:09:51.867453 containerd[1662]: time="2025-12-16T02:09:51.867398894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 02:09:51.868083 containerd[1662]: time="2025-12-16T02:09:51.868054416Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:09:51.868950 containerd[1662]: time="2025-12-16T02:09:51.868916258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:09:51.869885 containerd[1662]: time="2025-12-16T02:09:51.869723621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 521.767526ms" Dec 16 02:09:51.870166 containerd[1662]: time="2025-12-16T02:09:51.870116982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 02:09:51.872973 containerd[1662]: time="2025-12-16T02:09:51.872922430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 522.402888ms" Dec 16 02:09:51.875463 containerd[1662]: time="2025-12-16T02:09:51.875434238Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 508.655367ms" Dec 16 02:09:51.886603 containerd[1662]: time="2025-12-16T02:09:51.886558071Z" level=info msg="connecting to shim 2f5cf6e659e9f1c5ea3b30c006dbd2e0ad5aa547f551aec55c6039366e0277a8" address="unix:///run/containerd/s/3356d855ad9b0823631a7107113ef91c5e91137149c365af316aef822a95ecc7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:51.906421 containerd[1662]: time="2025-12-16T02:09:51.906376651Z" level=info msg="connecting to shim ea4f0c2ce76a1168810e559aff8bf209a24d6e4775567aeedf12e3986f0d197d" address="unix:///run/containerd/s/7491a6170c01437196681a00122c4c2fdcddba3fc842c54413bbbab26e3df3d7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:51.910427 containerd[1662]: time="2025-12-16T02:09:51.910378423Z" 
level=info msg="connecting to shim ef66f420dc985adaf126e6876c02285f8aba1a2a16d1174a32c956da17774227" address="unix:///run/containerd/s/6bc3810259378e32d889803bf3fbf61823e97c10a84e4728e6b0482f0d5e2f73" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:51.913076 systemd[1]: Started cri-containerd-2f5cf6e659e9f1c5ea3b30c006dbd2e0ad5aa547f551aec55c6039366e0277a8.scope - libcontainer container 2f5cf6e659e9f1c5ea3b30c006dbd2e0ad5aa547f551aec55c6039366e0277a8. Dec 16 02:09:51.946210 systemd[1]: Started cri-containerd-ea4f0c2ce76a1168810e559aff8bf209a24d6e4775567aeedf12e3986f0d197d.scope - libcontainer container ea4f0c2ce76a1168810e559aff8bf209a24d6e4775567aeedf12e3986f0d197d. Dec 16 02:09:51.948000 audit: BPF prog-id=83 op=LOAD Dec 16 02:09:51.949000 audit: BPF prog-id=84 op=LOAD Dec 16 02:09:51.949000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2572 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356366366536353965396631633565613362333063303036646264 Dec 16 02:09:51.949000 audit: BPF prog-id=84 op=UNLOAD Dec 16 02:09:51.949000 audit[2584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356366366536353965396631633565613362333063303036646264 Dec 16 02:09:51.949000 audit: BPF prog-id=85 op=LOAD Dec 16 02:09:51.949000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2572 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356366366536353965396631633565613362333063303036646264 Dec 16 02:09:51.950000 audit: BPF prog-id=86 op=LOAD Dec 16 02:09:51.950000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2572 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356366366536353965396631633565613362333063303036646264 Dec 16 02:09:51.950000 audit: BPF prog-id=86 op=UNLOAD Dec 16 02:09:51.950000 audit[2584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 02:09:51.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356366366536353965396631633565613362333063303036646264 Dec 16 02:09:51.950000 audit: BPF prog-id=85 op=UNLOAD Dec 16 02:09:51.950000 audit[2584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356366366536353965396631633565613362333063303036646264 Dec 16 02:09:51.950000 audit: BPF prog-id=87 op=LOAD Dec 16 02:09:51.950000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2572 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356366366536353965396631633565613362333063303036646264 Dec 16 02:09:51.951243 systemd[1]: Started cri-containerd-ef66f420dc985adaf126e6876c02285f8aba1a2a16d1174a32c956da17774227.scope - libcontainer container ef66f420dc985adaf126e6876c02285f8aba1a2a16d1174a32c956da17774227. 
Dec 16 02:09:51.958000 audit: BPF prog-id=88 op=LOAD Dec 16 02:09:51.959000 audit: BPF prog-id=89 op=LOAD Dec 16 02:09:51.959000 audit[2637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2607 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561346630633263653736613131363838313065353539616666386266 Dec 16 02:09:51.959000 audit: BPF prog-id=89 op=UNLOAD Dec 16 02:09:51.959000 audit[2637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561346630633263653736613131363838313065353539616666386266 Dec 16 02:09:51.959000 audit: BPF prog-id=90 op=LOAD Dec 16 02:09:51.959000 audit[2637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2607 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.959000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561346630633263653736613131363838313065353539616666386266 Dec 16 02:09:51.959000 audit: BPF prog-id=91 op=LOAD Dec 16 02:09:51.959000 audit[2637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2607 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561346630633263653736613131363838313065353539616666386266 Dec 16 02:09:51.959000 audit: BPF prog-id=91 op=UNLOAD Dec 16 02:09:51.959000 audit[2637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561346630633263653736613131363838313065353539616666386266 Dec 16 02:09:51.959000 audit: BPF prog-id=90 op=UNLOAD Dec 16 02:09:51.959000 audit[2637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:09:51.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561346630633263653736613131363838313065353539616666386266 Dec 16 02:09:51.959000 audit: BPF prog-id=92 op=LOAD Dec 16 02:09:51.959000 audit[2637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2607 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561346630633263653736613131363838313065353539616666386266 Dec 16 02:09:51.962000 audit: BPF prog-id=93 op=LOAD Dec 16 02:09:51.962000 audit: BPF prog-id=94 op=LOAD Dec 16 02:09:51.962000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2619 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566363666343230646339383561646166313236653638373663303232 Dec 16 02:09:51.962000 audit: BPF prog-id=94 op=UNLOAD Dec 16 02:09:51.962000 audit[2639]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566363666343230646339383561646166313236653638373663303232 Dec 16 02:09:51.963000 audit: BPF prog-id=95 op=LOAD Dec 16 02:09:51.963000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2619 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566363666343230646339383561646166313236653638373663303232 Dec 16 02:09:51.963000 audit: BPF prog-id=96 op=LOAD Dec 16 02:09:51.963000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2619 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566363666343230646339383561646166313236653638373663303232 Dec 16 02:09:51.963000 audit: BPF prog-id=96 op=UNLOAD Dec 16 02:09:51.963000 audit[2639]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566363666343230646339383561646166313236653638373663303232 Dec 16 02:09:51.963000 audit: BPF prog-id=95 op=UNLOAD Dec 16 02:09:51.963000 audit[2639]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566363666343230646339383561646166313236653638373663303232 Dec 16 02:09:51.963000 audit: BPF prog-id=97 op=LOAD Dec 16 02:09:51.963000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2619 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:51.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566363666343230646339383561646166313236653638373663303232 Dec 16 02:09:51.983776 containerd[1662]: time="2025-12-16T02:09:51.983710203Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-9-b4376e68e3,Uid:c459e627bb903b9316ec4cb6fa514cf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f5cf6e659e9f1c5ea3b30c006dbd2e0ad5aa547f551aec55c6039366e0277a8\""
Dec 16 02:09:51.993097 containerd[1662]: time="2025-12-16T02:09:51.993033831Z" level=info msg="CreateContainer within sandbox \"2f5cf6e659e9f1c5ea3b30c006dbd2e0ad5aa547f551aec55c6039366e0277a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 16 02:09:51.994205 containerd[1662]: time="2025-12-16T02:09:51.994153714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-9-b4376e68e3,Uid:220048cf79f702d12af4e5d47588f108,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea4f0c2ce76a1168810e559aff8bf209a24d6e4775567aeedf12e3986f0d197d\""
Dec 16 02:09:51.999021 containerd[1662]: time="2025-12-16T02:09:51.998988009Z" level=info msg="CreateContainer within sandbox \"ea4f0c2ce76a1168810e559aff8bf209a24d6e4775567aeedf12e3986f0d197d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 02:09:51.999990 containerd[1662]: time="2025-12-16T02:09:51.999953492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-9-b4376e68e3,Uid:52130e8df637ebe19ae48b61ae8db0a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef66f420dc985adaf126e6876c02285f8aba1a2a16d1174a32c956da17774227\""
Dec 16 02:09:52.002283 containerd[1662]: time="2025-12-16T02:09:52.002207779Z" level=info msg="Container a2a7b5c1be39e173fdca67712090e76cd8e6a29f28f0a01f36f053fc6846cf67: CDI devices from CRI Config.CDIDevices: []"
Dec 16 02:09:52.011467 containerd[1662]: time="2025-12-16T02:09:52.011417286Z" level=info msg="Container 02babe04c549688f0d22c7a660671b7d385ee587a5f7c9e7988e09f4d3fe91e9: CDI devices from CRI Config.CDIDevices: []"
Dec 16 02:09:52.013694 containerd[1662]: time="2025-12-16T02:09:52.013479972Z" level=info msg="CreateContainer within sandbox \"ef66f420dc985adaf126e6876c02285f8aba1a2a16d1174a32c956da17774227\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 16 02:09:52.020094 containerd[1662]: time="2025-12-16T02:09:52.020047712Z" level=info msg="CreateContainer within sandbox \"2f5cf6e659e9f1c5ea3b30c006dbd2e0ad5aa547f551aec55c6039366e0277a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a2a7b5c1be39e173fdca67712090e76cd8e6a29f28f0a01f36f053fc6846cf67\""
Dec 16 02:09:52.022021 containerd[1662]: time="2025-12-16T02:09:52.021988398Z" level=info msg="StartContainer for \"a2a7b5c1be39e173fdca67712090e76cd8e6a29f28f0a01f36f053fc6846cf67\""
Dec 16 02:09:52.023033 containerd[1662]: time="2025-12-16T02:09:52.022999601Z" level=info msg="connecting to shim a2a7b5c1be39e173fdca67712090e76cd8e6a29f28f0a01f36f053fc6846cf67" address="unix:///run/containerd/s/3356d855ad9b0823631a7107113ef91c5e91137149c365af316aef822a95ecc7" protocol=ttrpc version=3
Dec 16 02:09:52.023782 containerd[1662]: time="2025-12-16T02:09:52.023649483Z" level=info msg="CreateContainer within sandbox \"ea4f0c2ce76a1168810e559aff8bf209a24d6e4775567aeedf12e3986f0d197d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02babe04c549688f0d22c7a660671b7d385ee587a5f7c9e7988e09f4d3fe91e9\""
Dec 16 02:09:52.024220 containerd[1662]: time="2025-12-16T02:09:52.024195125Z" level=info msg="StartContainer for \"02babe04c549688f0d22c7a660671b7d385ee587a5f7c9e7988e09f4d3fe91e9\""
Dec 16 02:09:52.027274 containerd[1662]: time="2025-12-16T02:09:52.027153933Z" level=info msg="connecting to shim 02babe04c549688f0d22c7a660671b7d385ee587a5f7c9e7988e09f4d3fe91e9" address="unix:///run/containerd/s/7491a6170c01437196681a00122c4c2fdcddba3fc842c54413bbbab26e3df3d7" protocol=ttrpc version=3
Dec 16 02:09:52.028219 containerd[1662]: time="2025-12-16T02:09:52.028172617Z" level=info msg="Container 8be6ddb4933a28e7adedcc146945fe3147655aeb0c95db26e445fe9c7570d95a: CDI devices from CRI Config.CDIDevices: []"
Dec 16 02:09:52.037710 containerd[1662]: time="2025-12-16T02:09:52.037665405Z" level=info msg="CreateContainer within sandbox \"ef66f420dc985adaf126e6876c02285f8aba1a2a16d1174a32c956da17774227\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8be6ddb4933a28e7adedcc146945fe3147655aeb0c95db26e445fe9c7570d95a\""
Dec 16 02:09:52.038534 containerd[1662]: time="2025-12-16T02:09:52.038466687Z" level=info msg="StartContainer for \"8be6ddb4933a28e7adedcc146945fe3147655aeb0c95db26e445fe9c7570d95a\""
Dec 16 02:09:52.042406 containerd[1662]: time="2025-12-16T02:09:52.042355459Z" level=info msg="connecting to shim 8be6ddb4933a28e7adedcc146945fe3147655aeb0c95db26e445fe9c7570d95a" address="unix:///run/containerd/s/6bc3810259378e32d889803bf3fbf61823e97c10a84e4728e6b0482f0d5e2f73" protocol=ttrpc version=3
Dec 16 02:09:52.049113 systemd[1]: Started cri-containerd-a2a7b5c1be39e173fdca67712090e76cd8e6a29f28f0a01f36f053fc6846cf67.scope - libcontainer container a2a7b5c1be39e173fdca67712090e76cd8e6a29f28f0a01f36f053fc6846cf67.
Dec 16 02:09:52.053025 systemd[1]: Started cri-containerd-02babe04c549688f0d22c7a660671b7d385ee587a5f7c9e7988e09f4d3fe91e9.scope - libcontainer container 02babe04c549688f0d22c7a660671b7d385ee587a5f7c9e7988e09f4d3fe91e9.
Dec 16 02:09:52.064054 systemd[1]: Started cri-containerd-8be6ddb4933a28e7adedcc146945fe3147655aeb0c95db26e445fe9c7570d95a.scope - libcontainer container 8be6ddb4933a28e7adedcc146945fe3147655aeb0c95db26e445fe9c7570d95a.
Dec 16 02:09:52.065000 audit: BPF prog-id=98 op=LOAD Dec 16 02:09:52.066000 audit: BPF prog-id=99 op=LOAD Dec 16 02:09:52.066000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2572 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762356331626533396531373366646361363737313230393065 Dec 16 02:09:52.066000 audit: BPF prog-id=99 op=UNLOAD Dec 16 02:09:52.066000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762356331626533396531373366646361363737313230393065 Dec 16 02:09:52.067000 audit: BPF prog-id=100 op=LOAD Dec 16 02:09:52.067000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2572 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.067000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762356331626533396531373366646361363737313230393065 Dec 16 02:09:52.067000 audit: BPF prog-id=101 op=LOAD Dec 16 02:09:52.067000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2572 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762356331626533396531373366646361363737313230393065 Dec 16 02:09:52.067000 audit: BPF prog-id=101 op=UNLOAD Dec 16 02:09:52.067000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762356331626533396531373366646361363737313230393065 Dec 16 02:09:52.067000 audit: BPF prog-id=100 op=UNLOAD Dec 16 02:09:52.067000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 02:09:52.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762356331626533396531373366646361363737313230393065 Dec 16 02:09:52.067000 audit: BPF prog-id=102 op=LOAD Dec 16 02:09:52.067000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2572 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132613762356331626533396531373366646361363737313230393065 Dec 16 02:09:52.068000 audit: BPF prog-id=103 op=LOAD Dec 16 02:09:52.069000 audit: BPF prog-id=104 op=LOAD Dec 16 02:09:52.069000 audit[2710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2607 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626162653034633534393638386630643232633761363630363731 Dec 16 02:09:52.069000 audit: BPF prog-id=104 op=UNLOAD Dec 16 02:09:52.069000 audit[2710]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626162653034633534393638386630643232633761363630363731 Dec 16 02:09:52.069000 audit: BPF prog-id=105 op=LOAD Dec 16 02:09:52.069000 audit[2710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2607 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626162653034633534393638386630643232633761363630363731 Dec 16 02:09:52.069000 audit: BPF prog-id=106 op=LOAD Dec 16 02:09:52.069000 audit[2710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2607 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626162653034633534393638386630643232633761363630363731 Dec 16 02:09:52.069000 audit: BPF prog-id=106 op=UNLOAD Dec 16 02:09:52.069000 audit[2710]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626162653034633534393638386630643232633761363630363731 Dec 16 02:09:52.069000 audit: BPF prog-id=105 op=UNLOAD Dec 16 02:09:52.069000 audit[2710]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626162653034633534393638386630643232633761363630363731 Dec 16 02:09:52.069000 audit: BPF prog-id=107 op=LOAD Dec 16 02:09:52.069000 audit[2710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2607 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626162653034633534393638386630643232633761363630363731 Dec 16 02:09:52.078000 audit: BPF prog-id=108 op=LOAD Dec 16 02:09:52.078000 audit: BPF prog-id=109 op=LOAD Dec 16 02:09:52.078000 audit[2727]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2619 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653664646234393333613238653761646564636331343639343566 Dec 16 02:09:52.078000 audit: BPF prog-id=109 op=UNLOAD Dec 16 02:09:52.078000 audit[2727]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653664646234393333613238653761646564636331343639343566 Dec 16 02:09:52.079000 audit: BPF prog-id=110 op=LOAD Dec 16 02:09:52.079000 audit[2727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2619 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653664646234393333613238653761646564636331343639343566 Dec 16 02:09:52.079000 audit: BPF prog-id=111 op=LOAD Dec 16 02:09:52.079000 audit[2727]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2619 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653664646234393333613238653761646564636331343639343566 Dec 16 02:09:52.079000 audit: BPF prog-id=111 op=UNLOAD Dec 16 02:09:52.079000 audit[2727]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653664646234393333613238653761646564636331343639343566 Dec 16 02:09:52.079000 audit: BPF prog-id=110 op=UNLOAD Dec 16 02:09:52.079000 audit[2727]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653664646234393333613238653761646564636331343639343566 Dec 16 02:09:52.079000 audit: BPF prog-id=112 op=LOAD 
Dec 16 02:09:52.079000 audit[2727]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2619 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:52.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653664646234393333613238653761646564636331343639343566 Dec 16 02:09:52.109634 containerd[1662]: time="2025-12-16T02:09:52.109536821Z" level=info msg="StartContainer for \"a2a7b5c1be39e173fdca67712090e76cd8e6a29f28f0a01f36f053fc6846cf67\" returns successfully" Dec 16 02:09:52.110643 containerd[1662]: time="2025-12-16T02:09:52.110570304Z" level=info msg="StartContainer for \"02babe04c549688f0d22c7a660671b7d385ee587a5f7c9e7988e09f4d3fe91e9\" returns successfully" Dec 16 02:09:52.119219 containerd[1662]: time="2025-12-16T02:09:52.119027649Z" level=info msg="StartContainer for \"8be6ddb4933a28e7adedcc146945fe3147655aeb0c95db26e445fe9c7570d95a\" returns successfully" Dec 16 02:09:52.182561 kubelet[2523]: E1216 02:09:52.182509 2523 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.26.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-9-b4376e68e3&limit=500&resourceVersion=0\": dial tcp 10.0.26.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 02:09:52.407982 update_engine[1646]: I20251216 02:09:52.407882 1646 update_attempter.cc:509] Updating boot flags... 
Dec 16 02:09:52.473984 kubelet[2523]: I1216 02:09:52.473942 2523 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:52.915952 kubelet[2523]: E1216 02:09:52.915658 2523 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:52.919396 kubelet[2523]: E1216 02:09:52.919159 2523 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:52.921417 kubelet[2523]: E1216 02:09:52.921398 2523 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:53.925635 kubelet[2523]: E1216 02:09:53.925593 2523 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:53.926896 kubelet[2523]: E1216 02:09:53.926431 2523 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:54.261500 kubelet[2523]: E1216 02:09:54.261350 2523 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547-0-0-9-b4376e68e3\" not found" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:54.304822 kubelet[2523]: I1216 02:09:54.304778 2523 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:54.304822 kubelet[2523]: E1216 02:09:54.304824 2523 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4547-0-0-9-b4376e68e3\": node \"ci-4547-0-0-9-b4376e68e3\" not found" 
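The kubelet lines use Go's klog header format, `Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg`, where `L` is the severity letter (I/W/E/F) — so `E1216 02:09:54.314779 2523 kubelet_node_status.go:404]` is an Error record from that file and line. A minimal sketch of a header parser (the function and type names are my own):

```python
import re
from typing import NamedTuple

# klog header: Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
# where L is the severity letter (I/W/E/F).
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<tid>\d+)\s+(?P<file>[^:]+):(?P<line>\d+)\]\s?(?P<msg>.*)$"
)

class KlogHeader(NamedTuple):
    severity: str
    month: int
    day: int
    time: str
    thread_id: int
    file: str
    line: int
    message: str

def parse_klog(entry: str) -> KlogHeader:
    m = KLOG_RE.match(entry)
    if m is None:
        raise ValueError(f"not a klog entry: {entry!r}")
    return KlogHeader(m["sev"], int(m["month"]), int(m["day"]), m["time"],
                      int(m["tid"]), m["file"], int(m["line"]), m["msg"])

hdr = parse_klog('E1216 02:09:54.314779 2523 kubelet_node_status.go:404] '
                 '"Error getting the current node from lister"')
print(hdr.severity, hdr.file, hdr.line)
```

Note the header omits the year, which is why the journald `Dec 16 02:09:54...` prefix is still needed to place these records in time.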
Dec 16 02:09:54.314809 kubelet[2523]: E1216 02:09:54.314779 2523 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:54.415426 kubelet[2523]: E1216 02:09:54.415349 2523 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:54.516367 kubelet[2523]: E1216 02:09:54.516241 2523 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:54.616430 kubelet[2523]: E1216 02:09:54.616395 2523 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:54.717027 kubelet[2523]: E1216 02:09:54.716960 2523 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:54.817995 kubelet[2523]: E1216 02:09:54.817885 2523 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:54.918478 kubelet[2523]: E1216 02:09:54.918430 2523 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:54.989843 kubelet[2523]: I1216 02:09:54.989784 2523 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:54.995505 kubelet[2523]: E1216 02:09:54.995310 2523 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-9-b4376e68e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:54.995505 kubelet[2523]: I1216 02:09:54.995337 2523 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:54.996975 
kubelet[2523]: E1216 02:09:54.996952 2523 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:54.997220 kubelet[2523]: I1216 02:09:54.997049 2523 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:54.998648 kubelet[2523]: E1216 02:09:54.998624 2523 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:55.882047 kubelet[2523]: I1216 02:09:55.881926 2523 apiserver.go:52] "Watching apiserver" Dec 16 02:09:55.889482 kubelet[2523]: I1216 02:09:55.889445 2523 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 02:09:56.212912 kubelet[2523]: I1216 02:09:56.212689 2523 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:56.264782 systemd[1]: Reload requested from client PID 2824 ('systemctl') (unit session-10.scope)... Dec 16 02:09:56.264800 systemd[1]: Reloading... Dec 16 02:09:56.344906 zram_generator::config[2873]: No configuration found. Dec 16 02:09:56.538892 systemd[1]: Reloading finished in 273 ms. Dec 16 02:09:56.555671 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:56.569105 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 02:09:56.569372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:56.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 02:09:56.569441 systemd[1]: kubelet.service: Consumed 1.034s CPU time, 122.1M memory peak. Dec 16 02:09:56.570076 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 02:09:56.570147 kernel: audit: type=1131 audit(1765850996.568:394): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:56.571585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:56.571000 audit: BPF prog-id=113 op=LOAD Dec 16 02:09:56.573526 kernel: audit: type=1334 audit(1765850996.571:395): prog-id=113 op=LOAD Dec 16 02:09:56.573587 kernel: audit: type=1334 audit(1765850996.571:396): prog-id=114 op=LOAD Dec 16 02:09:56.571000 audit: BPF prog-id=114 op=LOAD Dec 16 02:09:56.571000 audit: BPF prog-id=64 op=UNLOAD Dec 16 02:09:56.575008 kernel: audit: type=1334 audit(1765850996.571:397): prog-id=64 op=UNLOAD Dec 16 02:09:56.575075 kernel: audit: type=1334 audit(1765850996.571:398): prog-id=65 op=UNLOAD Dec 16 02:09:56.571000 audit: BPF prog-id=65 op=UNLOAD Dec 16 02:09:56.576711 kernel: audit: type=1334 audit(1765850996.572:399): prog-id=115 op=LOAD Dec 16 02:09:56.576784 kernel: audit: type=1334 audit(1765850996.572:400): prog-id=63 op=UNLOAD Dec 16 02:09:56.572000 audit: BPF prog-id=115 op=LOAD Dec 16 02:09:56.572000 audit: BPF prog-id=63 op=UNLOAD Dec 16 02:09:56.577541 kernel: audit: type=1334 audit(1765850996.573:401): prog-id=116 op=LOAD Dec 16 02:09:56.573000 audit: BPF prog-id=116 op=LOAD Dec 16 02:09:56.578247 kernel: audit: type=1334 audit(1765850996.573:402): prog-id=66 op=UNLOAD Dec 16 02:09:56.573000 audit: BPF prog-id=66 op=UNLOAD Dec 16 02:09:56.574000 audit: BPF prog-id=117 op=LOAD Dec 16 02:09:56.574000 audit: BPF prog-id=118 op=LOAD Dec 16 02:09:56.579903 kernel: audit: type=1334 audit(1765850996.574:403): prog-id=117 op=LOAD Dec 16 02:09:56.574000 audit: 
BPF prog-id=67 op=UNLOAD Dec 16 02:09:56.574000 audit: BPF prog-id=68 op=UNLOAD Dec 16 02:09:56.575000 audit: BPF prog-id=119 op=LOAD Dec 16 02:09:56.575000 audit: BPF prog-id=73 op=UNLOAD Dec 16 02:09:56.576000 audit: BPF prog-id=120 op=LOAD Dec 16 02:09:56.576000 audit: BPF prog-id=72 op=UNLOAD Dec 16 02:09:56.577000 audit: BPF prog-id=121 op=LOAD Dec 16 02:09:56.585000 audit: BPF prog-id=77 op=UNLOAD Dec 16 02:09:56.585000 audit: BPF prog-id=122 op=LOAD Dec 16 02:09:56.585000 audit: BPF prog-id=123 op=LOAD Dec 16 02:09:56.585000 audit: BPF prog-id=78 op=UNLOAD Dec 16 02:09:56.585000 audit: BPF prog-id=79 op=UNLOAD Dec 16 02:09:56.586000 audit: BPF prog-id=124 op=LOAD Dec 16 02:09:56.586000 audit: BPF prog-id=80 op=UNLOAD Dec 16 02:09:56.586000 audit: BPF prog-id=125 op=LOAD Dec 16 02:09:56.586000 audit: BPF prog-id=126 op=LOAD Dec 16 02:09:56.586000 audit: BPF prog-id=81 op=UNLOAD Dec 16 02:09:56.586000 audit: BPF prog-id=82 op=UNLOAD Dec 16 02:09:56.587000 audit: BPF prog-id=127 op=LOAD Dec 16 02:09:56.587000 audit: BPF prog-id=74 op=UNLOAD Dec 16 02:09:56.587000 audit: BPF prog-id=128 op=LOAD Dec 16 02:09:56.587000 audit: BPF prog-id=129 op=LOAD Dec 16 02:09:56.587000 audit: BPF prog-id=75 op=UNLOAD Dec 16 02:09:56.587000 audit: BPF prog-id=76 op=UNLOAD Dec 16 02:09:56.589000 audit: BPF prog-id=130 op=LOAD Dec 16 02:09:56.589000 audit: BPF prog-id=69 op=UNLOAD Dec 16 02:09:56.589000 audit: BPF prog-id=131 op=LOAD Dec 16 02:09:56.589000 audit: BPF prog-id=132 op=LOAD Dec 16 02:09:56.589000 audit: BPF prog-id=70 op=UNLOAD Dec 16 02:09:56.589000 audit: BPF prog-id=71 op=UNLOAD Dec 16 02:09:58.306090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:58.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:58.329193 (kubelet)[2916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:09:58.362328 kubelet[2916]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:09:58.362328 kubelet[2916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:09:58.362662 kubelet[2916]: I1216 02:09:58.362355 2916 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:09:58.368894 kubelet[2916]: I1216 02:09:58.368043 2916 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 02:09:58.368894 kubelet[2916]: I1216 02:09:58.368069 2916 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:09:58.368894 kubelet[2916]: I1216 02:09:58.368097 2916 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 02:09:58.368894 kubelet[2916]: I1216 02:09:58.368104 2916 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
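The kernel-emitted audit lines interleaved above carry their own timestamp token, e.g. `audit(1765850996.568:394)`: seconds (with milliseconds) since the Unix epoch, then the record's serial number. Converting the epoch value back to UTC reproduces the journald prefix on the matching record (`Dec 16 02:09:56.568000` for serial 394 here). A minimal sketch of a token parser (the function name is my own):

```python
from datetime import datetime, timezone

# An audit token looks like "audit(1765850996.568:394)":
# epoch seconds.millis, then the record serial number.
def parse_audit_token(token: str) -> tuple[datetime, int]:
    inner = token[len("audit("):-1]      # "1765850996.568:394"
    stamp, serial = inner.split(":")
    return (datetime.fromtimestamp(float(stamp), tz=timezone.utc),
            int(serial))

ts, serial = parse_audit_token("audit(1765850996.568:394)")
print(ts.isoformat(), serial)
```

The serial number, not the timestamp, is what ties multi-part audit events together: the SYSCALL, PROCTITLE, and BPF records of one event share a serial.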
Dec 16 02:09:58.368894 kubelet[2916]: I1216 02:09:58.368298 2916 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 02:09:58.369754 kubelet[2916]: I1216 02:09:58.369727 2916 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 02:09:58.371800 kubelet[2916]: I1216 02:09:58.371761 2916 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:09:58.376490 kubelet[2916]: I1216 02:09:58.376465 2916 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:09:58.379142 kubelet[2916]: I1216 02:09:58.379107 2916 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 02:09:58.379481 kubelet[2916]: I1216 02:09:58.379428 2916 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:09:58.379710 kubelet[2916]: I1216 02:09:58.379494 2916 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4547-0-0-9-b4376e68e3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:09:58.379791 kubelet[2916]: I1216 02:09:58.379712 2916 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:09:58.379791 kubelet[2916]: I1216 02:09:58.379721 2916 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 02:09:58.379791 kubelet[2916]: I1216 02:09:58.379745 2916 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 02:09:58.380619 kubelet[2916]: I1216 02:09:58.380602 2916 
state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:09:58.380794 kubelet[2916]: I1216 02:09:58.380780 2916 kubelet.go:475] "Attempting to sync node with API server" Dec 16 02:09:58.380835 kubelet[2916]: I1216 02:09:58.380803 2916 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:09:58.380835 kubelet[2916]: I1216 02:09:58.380830 2916 kubelet.go:387] "Adding apiserver pod source" Dec 16 02:09:58.380915 kubelet[2916]: I1216 02:09:58.380840 2916 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:09:58.386043 kubelet[2916]: I1216 02:09:58.385999 2916 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:09:58.387507 kubelet[2916]: I1216 02:09:58.387484 2916 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 02:09:58.387639 kubelet[2916]: I1216 02:09:58.387626 2916 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 02:09:58.389870 kubelet[2916]: I1216 02:09:58.389803 2916 server.go:1262] "Started kubelet" Dec 16 02:09:58.390122 kubelet[2916]: I1216 02:09:58.390093 2916 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:09:58.390639 kubelet[2916]: I1216 02:09:58.390604 2916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:09:58.390774 kubelet[2916]: I1216 02:09:58.390726 2916 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:09:58.390854 kubelet[2916]: I1216 02:09:58.390842 2916 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 02:09:58.391052 kubelet[2916]: I1216 02:09:58.391019 2916 server.go:310] "Adding debug handlers to kubelet server" Dec 16 02:09:58.391248 
kubelet[2916]: I1216 02:09:58.391230 2916 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:09:58.392415 kubelet[2916]: I1216 02:09:58.392375 2916 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:09:58.396924 kubelet[2916]: I1216 02:09:58.396886 2916 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 02:09:58.396924 kubelet[2916]: I1216 02:09:58.396989 2916 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 02:09:58.396924 kubelet[2916]: I1216 02:09:58.397264 2916 reconciler.go:29] "Reconciler: start to sync state" Dec 16 02:09:58.397733 kubelet[2916]: E1216 02:09:58.397710 2916 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-0-0-9-b4376e68e3\" not found" Dec 16 02:09:58.405200 kubelet[2916]: E1216 02:09:58.405009 2916 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:09:58.405200 kubelet[2916]: I1216 02:09:58.405807 2916 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:09:58.408318 kubelet[2916]: I1216 02:09:58.407771 2916 factory.go:223] Registration of the containerd container factory successfully Dec 16 02:09:58.408318 kubelet[2916]: I1216 02:09:58.407792 2916 factory.go:223] Registration of the systemd container factory successfully Dec 16 02:09:58.411690 kubelet[2916]: I1216 02:09:58.411651 2916 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 02:09:58.420699 kubelet[2916]: I1216 02:09:58.420671 2916 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 02:09:58.420699 kubelet[2916]: I1216 02:09:58.420697 2916 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 02:09:58.420808 kubelet[2916]: I1216 02:09:58.420717 2916 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 02:09:58.420808 kubelet[2916]: E1216 02:09:58.420758 2916 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:09:58.443112 kubelet[2916]: I1216 02:09:58.443085 2916 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:09:58.443209 kubelet[2916]: I1216 02:09:58.443104 2916 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:09:58.443209 kubelet[2916]: I1216 02:09:58.443156 2916 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:09:58.443447 kubelet[2916]: I1216 02:09:58.443286 2916 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 02:09:58.443447 kubelet[2916]: I1216 02:09:58.443297 2916 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 02:09:58.443447 kubelet[2916]: I1216 02:09:58.443311 2916 policy_none.go:49] "None policy: Start" Dec 16 02:09:58.443447 kubelet[2916]: I1216 02:09:58.443320 2916 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 02:09:58.443447 kubelet[2916]: I1216 02:09:58.443327 2916 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 02:09:58.443447 kubelet[2916]: I1216 02:09:58.443415 2916 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 02:09:58.443447 kubelet[2916]: I1216 02:09:58.443425 2916 policy_none.go:47] "Start" Dec 16 02:09:58.447628 kubelet[2916]: E1216 02:09:58.447604 2916 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 02:09:58.447973 kubelet[2916]: I1216 02:09:58.447772 
2916 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:09:58.447973 kubelet[2916]: I1216 02:09:58.447789 2916 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:09:58.448065 kubelet[2916]: I1216 02:09:58.447995 2916 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:09:58.449149 kubelet[2916]: E1216 02:09:58.449100 2916 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 02:09:58.522263 kubelet[2916]: I1216 02:09:58.522217 2916 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.522263 kubelet[2916]: I1216 02:09:58.522270 2916 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.522488 kubelet[2916]: I1216 02:09:58.522289 2916 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.529623 kubelet[2916]: E1216 02:09:58.529552 2916 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.554578 kubelet[2916]: I1216 02:09:58.554546 2916 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.561945 kubelet[2916]: I1216 02:09:58.561834 2916 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.562949 kubelet[2916]: I1216 02:09:58.562926 2916 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.599558 kubelet[2916]: I1216 02:09:58.599371 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52130e8df637ebe19ae48b61ae8db0a3-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" (UID: \"52130e8df637ebe19ae48b61ae8db0a3\") " pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.599558 kubelet[2916]: I1216 02:09:58.599498 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.599558 kubelet[2916]: I1216 02:09:58.599524 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/220048cf79f702d12af4e5d47588f108-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-9-b4376e68e3\" (UID: \"220048cf79f702d12af4e5d47588f108\") " pod="kube-system/kube-scheduler-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.599820 kubelet[2916]: I1216 02:09:58.599803 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52130e8df637ebe19ae48b61ae8db0a3-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" (UID: \"52130e8df637ebe19ae48b61ae8db0a3\") " pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.599946 kubelet[2916]: I1216 02:09:58.599930 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52130e8df637ebe19ae48b61ae8db0a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" (UID: \"52130e8df637ebe19ae48b61ae8db0a3\") " pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.600022 kubelet[2916]: I1216 
02:09:58.600009 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.600078 kubelet[2916]: I1216 02:09:58.600068 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.600149 kubelet[2916]: I1216 02:09:58.600128 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:58.600264 kubelet[2916]: I1216 02:09:58.600218 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c459e627bb903b9316ec4cb6fa514cf6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-9-b4376e68e3\" (UID: \"c459e627bb903b9316ec4cb6fa514cf6\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:59.381884 kubelet[2916]: I1216 02:09:59.381794 2916 apiserver.go:52] "Watching apiserver" Dec 16 02:09:59.397468 kubelet[2916]: I1216 02:09:59.397408 2916 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 02:09:59.435089 
kubelet[2916]: I1216 02:09:59.433440 2916 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:59.438380 kubelet[2916]: E1216 02:09:59.438206 2916 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-9-b4376e68e3\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" Dec 16 02:09:59.483899 kubelet[2916]: I1216 02:09:59.483672 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-0-0-9-b4376e68e3" podStartSLOduration=3.483653956 podStartE2EDuration="3.483653956s" podCreationTimestamp="2025-12-16 02:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:59.473362565 +0000 UTC m=+1.141397707" watchObservedRunningTime="2025-12-16 02:09:59.483653956 +0000 UTC m=+1.151689098" Dec 16 02:09:59.484063 kubelet[2916]: I1216 02:09:59.483978 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-0-0-9-b4376e68e3" podStartSLOduration=1.4839719169999999 podStartE2EDuration="1.483971917s" podCreationTimestamp="2025-12-16 02:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:59.483482475 +0000 UTC m=+1.151517617" watchObservedRunningTime="2025-12-16 02:09:59.483971917 +0000 UTC m=+1.152007059" Dec 16 02:09:59.502855 kubelet[2916]: I1216 02:09:59.502785 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-9-b4376e68e3" podStartSLOduration=1.502768733 podStartE2EDuration="1.502768733s" podCreationTimestamp="2025-12-16 02:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-16 02:09:59.492917024 +0000 UTC m=+1.160952246" watchObservedRunningTime="2025-12-16 02:09:59.502768733 +0000 UTC m=+1.170803875" Dec 16 02:10:01.454878 kubelet[2916]: I1216 02:10:01.454771 2916 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 02:10:01.455178 containerd[1662]: time="2025-12-16T02:10:01.455109153Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 02:10:01.455351 kubelet[2916]: I1216 02:10:01.455277 2916 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 02:10:02.514575 systemd[1]: Created slice kubepods-besteffort-pod7fdc6b28_9060_4972_a64a_f8d818d3fd0a.slice - libcontainer container kubepods-besteffort-pod7fdc6b28_9060_4972_a64a_f8d818d3fd0a.slice. Dec 16 02:10:02.526062 kubelet[2916]: I1216 02:10:02.526010 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7fdc6b28-9060-4972-a64a-f8d818d3fd0a-kube-proxy\") pod \"kube-proxy-s5686\" (UID: \"7fdc6b28-9060-4972-a64a-f8d818d3fd0a\") " pod="kube-system/kube-proxy-s5686" Dec 16 02:10:02.526062 kubelet[2916]: I1216 02:10:02.526059 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fdc6b28-9060-4972-a64a-f8d818d3fd0a-xtables-lock\") pod \"kube-proxy-s5686\" (UID: \"7fdc6b28-9060-4972-a64a-f8d818d3fd0a\") " pod="kube-system/kube-proxy-s5686" Dec 16 02:10:02.526389 kubelet[2916]: I1216 02:10:02.526079 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fdc6b28-9060-4972-a64a-f8d818d3fd0a-lib-modules\") pod \"kube-proxy-s5686\" (UID: \"7fdc6b28-9060-4972-a64a-f8d818d3fd0a\") " 
pod="kube-system/kube-proxy-s5686" Dec 16 02:10:02.526389 kubelet[2916]: I1216 02:10:02.526096 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff45n\" (UniqueName: \"kubernetes.io/projected/7fdc6b28-9060-4972-a64a-f8d818d3fd0a-kube-api-access-ff45n\") pod \"kube-proxy-s5686\" (UID: \"7fdc6b28-9060-4972-a64a-f8d818d3fd0a\") " pod="kube-system/kube-proxy-s5686" Dec 16 02:10:02.627380 systemd[1]: Created slice kubepods-besteffort-pod3f495adf_c713_46db_8f8d_d9542dbc4ddd.slice - libcontainer container kubepods-besteffort-pod3f495adf_c713_46db_8f8d_d9542dbc4ddd.slice. Dec 16 02:10:02.727926 kubelet[2916]: I1216 02:10:02.727822 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zgd4\" (UniqueName: \"kubernetes.io/projected/3f495adf-c713-46db-8f8d-d9542dbc4ddd-kube-api-access-6zgd4\") pod \"tigera-operator-65cdcdfd6d-xc4jl\" (UID: \"3f495adf-c713-46db-8f8d-d9542dbc4ddd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xc4jl" Dec 16 02:10:02.728105 kubelet[2916]: I1216 02:10:02.728089 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f495adf-c713-46db-8f8d-d9542dbc4ddd-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-xc4jl\" (UID: \"3f495adf-c713-46db-8f8d-d9542dbc4ddd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xc4jl" Dec 16 02:10:02.829987 containerd[1662]: time="2025-12-16T02:10:02.829897680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5686,Uid:7fdc6b28-9060-4972-a64a-f8d818d3fd0a,Namespace:kube-system,Attempt:0,}" Dec 16 02:10:02.853354 containerd[1662]: time="2025-12-16T02:10:02.853264190Z" level=info msg="connecting to shim eee6ce79f98ec00bc3e1d52a8f6b42bf6be2f5be3cb896e30b8029a4f138270c" 
address="unix:///run/containerd/s/5e7dafc90e503ea12f52ea84bf4bf09ec12700bfb10ff05cd3cc8e373ab1916b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:02.878064 systemd[1]: Started cri-containerd-eee6ce79f98ec00bc3e1d52a8f6b42bf6be2f5be3cb896e30b8029a4f138270c.scope - libcontainer container eee6ce79f98ec00bc3e1d52a8f6b42bf6be2f5be3cb896e30b8029a4f138270c. Dec 16 02:10:02.886000 audit: BPF prog-id=133 op=LOAD Dec 16 02:10:02.888738 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 02:10:02.888803 kernel: audit: type=1334 audit(1765851002.886:436): prog-id=133 op=LOAD Dec 16 02:10:02.888820 kernel: audit: type=1334 audit(1765851002.886:437): prog-id=134 op=LOAD Dec 16 02:10:02.886000 audit: BPF prog-id=134 op=LOAD Dec 16 02:10:02.886000 audit[2991]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.893434 kernel: audit: type=1300 audit(1765851002.886:437): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.893491 kernel: audit: type=1327 audit(1765851002.886:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.886000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.887000 audit: BPF prog-id=134 op=UNLOAD Dec 16 02:10:02.898011 kernel: audit: type=1334 audit(1765851002.887:438): prog-id=134 op=UNLOAD Dec 16 02:10:02.898086 kernel: audit: type=1300 audit(1765851002.887:438): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.887000 audit[2991]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.905633 kernel: audit: type=1327 audit(1765851002.887:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.905697 kernel: audit: type=1334 audit(1765851002.888:439): prog-id=135 op=LOAD Dec 16 02:10:02.888000 audit: BPF prog-id=135 op=LOAD Dec 16 02:10:02.906559 kernel: audit: type=1300 audit(1765851002.888:439): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001783e8 a2=98 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.888000 audit[2991]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.913889 kernel: audit: type=1327 audit(1765851002.888:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.892000 audit: BPF prog-id=136 op=LOAD Dec 16 02:10:02.892000 audit[2991]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.896000 audit: BPF prog-id=136 op=UNLOAD Dec 16 02:10:02.896000 audit[2991]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.896000 audit: BPF prog-id=135 op=UNLOAD Dec 16 02:10:02.896000 audit[2991]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.896000 audit: BPF prog-id=137 op=LOAD Dec 16 02:10:02.896000 audit[2991]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2980 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653663653739663938656330306263336531643532613866366234 Dec 16 02:10:02.924892 containerd[1662]: time="2025-12-16T02:10:02.924832685Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5686,Uid:7fdc6b28-9060-4972-a64a-f8d818d3fd0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"eee6ce79f98ec00bc3e1d52a8f6b42bf6be2f5be3cb896e30b8029a4f138270c\"" Dec 16 02:10:02.931989 containerd[1662]: time="2025-12-16T02:10:02.931960227Z" level=info msg="CreateContainer within sandbox \"eee6ce79f98ec00bc3e1d52a8f6b42bf6be2f5be3cb896e30b8029a4f138270c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 02:10:02.932360 containerd[1662]: time="2025-12-16T02:10:02.932330868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xc4jl,Uid:3f495adf-c713-46db-8f8d-d9542dbc4ddd,Namespace:tigera-operator,Attempt:0,}" Dec 16 02:10:02.943339 containerd[1662]: time="2025-12-16T02:10:02.943298181Z" level=info msg="Container 068bcb608ac539f263095954b7a8836588b6fd1fc0650852293b4953975f21f6: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:02.953393 containerd[1662]: time="2025-12-16T02:10:02.953352251Z" level=info msg="CreateContainer within sandbox \"eee6ce79f98ec00bc3e1d52a8f6b42bf6be2f5be3cb896e30b8029a4f138270c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"068bcb608ac539f263095954b7a8836588b6fd1fc0650852293b4953975f21f6\"" Dec 16 02:10:02.956125 containerd[1662]: time="2025-12-16T02:10:02.955432017Z" level=info msg="StartContainer for \"068bcb608ac539f263095954b7a8836588b6fd1fc0650852293b4953975f21f6\"" Dec 16 02:10:02.957078 containerd[1662]: time="2025-12-16T02:10:02.957045182Z" level=info msg="connecting to shim 068bcb608ac539f263095954b7a8836588b6fd1fc0650852293b4953975f21f6" address="unix:///run/containerd/s/5e7dafc90e503ea12f52ea84bf4bf09ec12700bfb10ff05cd3cc8e373ab1916b" protocol=ttrpc version=3 Dec 16 02:10:02.964980 containerd[1662]: time="2025-12-16T02:10:02.964924325Z" level=info msg="connecting to shim f2496d18b1427e3ea6ad424de9cdbed45f8ed17b271f679d2c395e238c184f4c" 
address="unix:///run/containerd/s/8a16b4c6652a1d01a9320f57ecb7c293e8adb4a4e41c776ac86538cbaa41e689" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:02.975073 systemd[1]: Started cri-containerd-068bcb608ac539f263095954b7a8836588b6fd1fc0650852293b4953975f21f6.scope - libcontainer container 068bcb608ac539f263095954b7a8836588b6fd1fc0650852293b4953975f21f6. Dec 16 02:10:02.992142 systemd[1]: Started cri-containerd-f2496d18b1427e3ea6ad424de9cdbed45f8ed17b271f679d2c395e238c184f4c.scope - libcontainer container f2496d18b1427e3ea6ad424de9cdbed45f8ed17b271f679d2c395e238c184f4c. Dec 16 02:10:03.001000 audit: BPF prog-id=138 op=LOAD Dec 16 02:10:03.002000 audit: BPF prog-id=139 op=LOAD Dec 16 02:10:03.002000 audit[3047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3030 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632343936643138623134323765336561366164343234646539636462 Dec 16 02:10:03.002000 audit: BPF prog-id=139 op=UNLOAD Dec 16 02:10:03.002000 audit[3047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632343936643138623134323765336561366164343234646539636462 Dec 
16 02:10:03.002000 audit: BPF prog-id=140 op=LOAD Dec 16 02:10:03.002000 audit[3047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3030 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632343936643138623134323765336561366164343234646539636462 Dec 16 02:10:03.002000 audit: BPF prog-id=141 op=LOAD Dec 16 02:10:03.002000 audit[3047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3030 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632343936643138623134323765336561366164343234646539636462 Dec 16 02:10:03.002000 audit: BPF prog-id=141 op=UNLOAD Dec 16 02:10:03.002000 audit[3047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.002000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632343936643138623134323765336561366164343234646539636462 Dec 16 02:10:03.002000 audit: BPF prog-id=140 op=UNLOAD Dec 16 02:10:03.002000 audit[3047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632343936643138623134323765336561366164343234646539636462 Dec 16 02:10:03.002000 audit: BPF prog-id=142 op=LOAD Dec 16 02:10:03.002000 audit[3047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3030 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632343936643138623134323765336561366164343234646539636462 Dec 16 02:10:03.019000 audit: BPF prog-id=143 op=LOAD Dec 16 02:10:03.019000 audit[3017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2980 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 02:10:03.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036386263623630386163353339663236333039353935346237613838 Dec 16 02:10:03.019000 audit: BPF prog-id=144 op=LOAD Dec 16 02:10:03.019000 audit[3017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2980 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036386263623630386163353339663236333039353935346237613838 Dec 16 02:10:03.019000 audit: BPF prog-id=144 op=UNLOAD Dec 16 02:10:03.019000 audit[3017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036386263623630386163353339663236333039353935346237613838 Dec 16 02:10:03.019000 audit: BPF prog-id=143 op=UNLOAD Dec 16 02:10:03.019000 audit[3017]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036386263623630386163353339663236333039353935346237613838 Dec 16 02:10:03.019000 audit: BPF prog-id=145 op=LOAD Dec 16 02:10:03.019000 audit[3017]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2980 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036386263623630386163353339663236333039353935346237613838 Dec 16 02:10:03.032589 containerd[1662]: time="2025-12-16T02:10:03.032544528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xc4jl,Uid:3f495adf-c713-46db-8f8d-d9542dbc4ddd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f2496d18b1427e3ea6ad424de9cdbed45f8ed17b271f679d2c395e238c184f4c\"" Dec 16 02:10:03.034444 containerd[1662]: time="2025-12-16T02:10:03.034407934Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 02:10:03.043090 containerd[1662]: time="2025-12-16T02:10:03.043051960Z" level=info msg="StartContainer for \"068bcb608ac539f263095954b7a8836588b6fd1fc0650852293b4953975f21f6\" returns successfully" Dec 16 02:10:03.275000 audit[3126]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.275000 audit[3126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 
a1=ffffcf7b5b90 a2=0 a3=1 items=0 ppid=3054 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:10:03.275000 audit[3127]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.275000 audit[3127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc0e223c0 a2=0 a3=1 items=0 ppid=3054 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:10:03.276000 audit[3129]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.276000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca3050a0 a2=0 a3=1 items=0 ppid=3054 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:10:03.277000 audit[3131]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.277000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcfb9afa0 a2=0 a3=1 items=0 ppid=3054 pid=3131 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:10:03.277000 audit[3132]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.277000 audit[3132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff89fa3d0 a2=0 a3=1 items=0 ppid=3054 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:10:03.278000 audit[3134]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.278000 audit[3134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd7e0950 a2=0 a3=1 items=0 ppid=3054 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:10:03.382000 audit[3136]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.382000 audit[3136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe008add0 a2=0 a3=1 items=0 ppid=3054 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:10:03.385000 audit[3138]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.385000 audit[3138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcc25f030 a2=0 a3=1 items=0 ppid=3054 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.385000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 02:10:03.390000 audit[3141]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.390000 audit[3141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc88cc1e0 a2=0 a3=1 items=0 ppid=3054 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 02:10:03.391000 audit[3142]: NETFILTER_CFG table=filter:63 family=2 
entries=1 op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.391000 audit[3142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5f02330 a2=0 a3=1 items=0 ppid=3054 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.391000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:10:03.394000 audit[3144]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.394000 audit[3144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe3366fd0 a2=0 a3=1 items=0 ppid=3054 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:10:03.395000 audit[3145]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.395000 audit[3145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9963410 a2=0 a3=1 items=0 ppid=3054 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 
02:10:03.397000 audit[3147]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.397000 audit[3147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffddbbd580 a2=0 a3=1 items=0 ppid=3054 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:10:03.401000 audit[3150]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.401000 audit[3150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc99c1380 a2=0 a3=1 items=0 ppid=3054 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.401000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:10:03.402000 audit[3151]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.402000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff34c1300 a2=0 a3=1 items=0 ppid=3054 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:10:03.404000 audit[3153]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.404000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe528c3d0 a2=0 a3=1 items=0 ppid=3054 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:10:03.405000 audit[3154]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.405000 audit[3154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffc25940 a2=0 a3=1 items=0 ppid=3054 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:10:03.407000 audit[3156]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.407000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc28841c0 a2=0 a3=1 items=0 ppid=3054 pid=3156 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 02:10:03.411000 audit[3159]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.411000 audit[3159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd0d697a0 a2=0 a3=1 items=0 ppid=3054 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 02:10:03.415000 audit[3162]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.415000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcc2d9760 a2=0 a3=1 items=0 ppid=3054 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.415000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 02:10:03.416000 audit[3163]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.416000 audit[3163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdcc198b0 a2=0 a3=1 items=0 ppid=3054 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:10:03.419000 audit[3165]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.419000 audit[3165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdd4fce00 a2=0 a3=1 items=0 ppid=3054 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:10:03.422000 audit[3168]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.422000 audit[3168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd277af80 a2=0 a3=1 items=0 ppid=3054 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:10:03.423000 audit[3169]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.423000 audit[3169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd024440 a2=0 a3=1 items=0 ppid=3054 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:10:03.426000 audit[3171]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:10:03.426000 audit[3171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff180c1e0 a2=0 a3=1 items=0 ppid=3054 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:10:03.453000 audit[3177]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
02:10:03.453000 audit[3177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcc7a4220 a2=0 a3=1 items=0 ppid=3054 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.462000 audit[3177]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:03.462000 audit[3177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffcc7a4220 a2=0 a3=1 items=0 ppid=3054 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.463000 audit[3182]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.463000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd2211c70 a2=0 a3=1 items=0 ppid=3054 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:10:03.468000 audit[3184]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.468000 
audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff4058430 a2=0 a3=1 items=0 ppid=3054 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 02:10:03.472000 audit[3187]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.472000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd440ffd0 a2=0 a3=1 items=0 ppid=3054 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 02:10:03.473000 audit[3188]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.473000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4abce50 a2=0 a3=1 items=0 ppid=3054 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.473000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:10:03.475000 audit[3190]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.475000 audit[3190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4764ac0 a2=0 a3=1 items=0 ppid=3054 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:10:03.477000 audit[3191]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.477000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeba16770 a2=0 a3=1 items=0 ppid=3054 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:10:03.479000 audit[3193]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.479000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffde1cb2e0 a2=0 a3=1 items=0 ppid=3054 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:10:03.483000 audit[3196]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.483000 audit[3196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff6a57ac0 a2=0 a3=1 items=0 ppid=3054 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:10:03.484000 audit[3197]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.484000 audit[3197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff29c7200 a2=0 a3=1 items=0 ppid=3054 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:10:03.487000 audit[3199]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
02:10:03.487000 audit[3199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd6cf6640 a2=0 a3=1 items=0 ppid=3054 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:10:03.488000 audit[3200]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3200 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.488000 audit[3200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcabd26c0 a2=0 a3=1 items=0 ppid=3054 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.488000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:10:03.490000 audit[3202]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.490000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd9e94980 a2=0 a3=1 items=0 ppid=3054 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.490000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 02:10:03.494000 audit[3205]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.494000 audit[3205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc79fe320 a2=0 a3=1 items=0 ppid=3054 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 02:10:03.497000 audit[3208]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.497000 audit[3208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcacb0cb0 a2=0 a3=1 items=0 ppid=3054 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 02:10:03.498000 audit[3209]: NETFILTER_CFG table=nat:95 family=10 entries=1 
op=nft_register_chain pid=3209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.498000 audit[3209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe41e9d30 a2=0 a3=1 items=0 ppid=3054 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:10:03.501000 audit[3211]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.501000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe74a74f0 a2=0 a3=1 items=0 ppid=3054 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:10:03.504000 audit[3214]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.504000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc2ee0ad0 a2=0 a3=1 items=0 ppid=3054 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.504000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:10:03.505000 audit[3215]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.505000 audit[3215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4cfe820 a2=0 a3=1 items=0 ppid=3054 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:10:03.507000 audit[3217]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.507000 audit[3217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffea5b77e0 a2=0 a3=1 items=0 ppid=3054 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:10:03.509000 audit[3218]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.509000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1e6bf40 a2=0 a3=1 items=0 ppid=3054 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:10:03.511000 audit[3220]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.511000 audit[3220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffecb2f780 a2=0 a3=1 items=0 ppid=3054 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.511000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:10:03.514000 audit[3223]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:10:03.514000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe1972730 a2=0 a3=1 items=0 ppid=3054 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.514000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:10:03.517000 audit[3225]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:10:03.517000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffcf54b9f0 a2=0 a3=1 items=0 ppid=3054 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.517000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.518000 audit[3225]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:10:03.518000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffcf54b9f0 a2=0 a3=1 items=0 ppid=3054 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.518000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:04.866024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004016645.mount: Deactivated successfully. 
Dec 16 02:10:05.109852 kubelet[2916]: I1216 02:10:05.109696 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s5686" podStartSLOduration=3.109680603 podStartE2EDuration="3.109680603s" podCreationTimestamp="2025-12-16 02:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:10:03.466719512 +0000 UTC m=+5.134754654" watchObservedRunningTime="2025-12-16 02:10:05.109680603 +0000 UTC m=+6.777715745" Dec 16 02:10:05.200991 containerd[1662]: time="2025-12-16T02:10:05.200927317Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:05.202350 containerd[1662]: time="2025-12-16T02:10:05.202295441Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 02:10:05.203406 containerd[1662]: time="2025-12-16T02:10:05.203370565Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:05.205611 containerd[1662]: time="2025-12-16T02:10:05.205568811Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:05.206716 containerd[1662]: time="2025-12-16T02:10:05.206685295Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.172236241s" Dec 16 02:10:05.206747 containerd[1662]: time="2025-12-16T02:10:05.206730815Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 02:10:05.212162 containerd[1662]: time="2025-12-16T02:10:05.212123431Z" level=info msg="CreateContainer within sandbox \"f2496d18b1427e3ea6ad424de9cdbed45f8ed17b271f679d2c395e238c184f4c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 02:10:05.219621 containerd[1662]: time="2025-12-16T02:10:05.219578933Z" level=info msg="Container 01d63070ef40928d0d88c04b69f25389d72fe30392c61b3d6b69129c14fe3923: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:05.222598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175696410.mount: Deactivated successfully. Dec 16 02:10:05.228319 containerd[1662]: time="2025-12-16T02:10:05.228196639Z" level=info msg="CreateContainer within sandbox \"f2496d18b1427e3ea6ad424de9cdbed45f8ed17b271f679d2c395e238c184f4c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"01d63070ef40928d0d88c04b69f25389d72fe30392c61b3d6b69129c14fe3923\"" Dec 16 02:10:05.228899 containerd[1662]: time="2025-12-16T02:10:05.228804081Z" level=info msg="StartContainer for \"01d63070ef40928d0d88c04b69f25389d72fe30392c61b3d6b69129c14fe3923\"" Dec 16 02:10:05.230196 containerd[1662]: time="2025-12-16T02:10:05.230068285Z" level=info msg="connecting to shim 01d63070ef40928d0d88c04b69f25389d72fe30392c61b3d6b69129c14fe3923" address="unix:///run/containerd/s/8a16b4c6652a1d01a9320f57ecb7c293e8adb4a4e41c776ac86538cbaa41e689" protocol=ttrpc version=3 Dec 16 02:10:05.252240 systemd[1]: Started cri-containerd-01d63070ef40928d0d88c04b69f25389d72fe30392c61b3d6b69129c14fe3923.scope - libcontainer container 01d63070ef40928d0d88c04b69f25389d72fe30392c61b3d6b69129c14fe3923. 
Dec 16 02:10:05.261000 audit: BPF prog-id=146 op=LOAD Dec 16 02:10:05.262000 audit: BPF prog-id=147 op=LOAD Dec 16 02:10:05.262000 audit[3234]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3030 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643633303730656634303932386430643838633034623639663235 Dec 16 02:10:05.262000 audit: BPF prog-id=147 op=UNLOAD Dec 16 02:10:05.262000 audit[3234]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643633303730656634303932386430643838633034623639663235 Dec 16 02:10:05.262000 audit: BPF prog-id=148 op=LOAD Dec 16 02:10:05.262000 audit[3234]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3030 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.262000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643633303730656634303932386430643838633034623639663235 Dec 16 02:10:05.262000 audit: BPF prog-id=149 op=LOAD Dec 16 02:10:05.262000 audit[3234]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3030 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643633303730656634303932386430643838633034623639663235 Dec 16 02:10:05.262000 audit: BPF prog-id=149 op=UNLOAD Dec 16 02:10:05.262000 audit[3234]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643633303730656634303932386430643838633034623639663235 Dec 16 02:10:05.262000 audit: BPF prog-id=148 op=UNLOAD Dec 16 02:10:05.262000 audit[3234]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 02:10:05.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643633303730656634303932386430643838633034623639663235 Dec 16 02:10:05.262000 audit: BPF prog-id=150 op=LOAD Dec 16 02:10:05.262000 audit[3234]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3030 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643633303730656634303932386430643838633034623639663235 Dec 16 02:10:05.279530 containerd[1662]: time="2025-12-16T02:10:05.279493913Z" level=info msg="StartContainer for \"01d63070ef40928d0d88c04b69f25389d72fe30392c61b3d6b69129c14fe3923\" returns successfully" Dec 16 02:10:05.463979 kubelet[2916]: I1216 02:10:05.462473 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-xc4jl" podStartSLOduration=1.288871018 podStartE2EDuration="3.462454262s" podCreationTimestamp="2025-12-16 02:10:02 +0000 UTC" firstStartedPulling="2025-12-16 02:10:03.033903813 +0000 UTC m=+4.701938955" lastFinishedPulling="2025-12-16 02:10:05.207487057 +0000 UTC m=+6.875522199" observedRunningTime="2025-12-16 02:10:05.462437582 +0000 UTC m=+7.130472724" watchObservedRunningTime="2025-12-16 02:10:05.462454262 +0000 UTC m=+7.130489364" Dec 16 02:10:10.623997 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 02:10:10.624122 kernel: audit: type=1106 audit(1765851010.621:516): pid=1943 uid=500 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:10:10.621000 audit[1943]: USER_END pid=1943 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:10:10.622670 sudo[1943]: pam_unix(sudo:session): session closed for user root Dec 16 02:10:10.622000 audit[1943]: CRED_DISP pid=1943 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:10:10.630166 kernel: audit: type=1104 audit(1765851010.622:517): pid=1943 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:10:10.779875 sshd[1942]: Connection closed by 139.178.68.195 port 58822 Dec 16 02:10:10.780240 sshd-session[1938]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:10.780000 audit[1938]: USER_END pid=1938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:10:10.787706 systemd[1]: sshd@8-10.0.26.207:22-139.178.68.195:58822.service: Deactivated successfully. 
Dec 16 02:10:10.784000 audit[1938]: CRED_DISP pid=1938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:10:10.792340 kernel: audit: type=1106 audit(1765851010.780:518): pid=1938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:10:10.792410 kernel: audit: type=1104 audit(1765851010.784:519): pid=1938 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:10:10.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.26.207:22-139.178.68.195:58822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:10.792681 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 02:10:10.792989 systemd[1]: session-10.scope: Consumed 6.342s CPU time, 229.5M memory peak. Dec 16 02:10:10.795021 kernel: audit: type=1131 audit(1765851010.787:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.26.207:22-139.178.68.195:58822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:10.797104 systemd-logind[1644]: Session 10 logged out. Waiting for processes to exit. Dec 16 02:10:10.798629 systemd-logind[1644]: Removed session 10. 
Dec 16 02:10:12.291000 audit[3324]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:12.291000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc93bcc50 a2=0 a3=1 items=0 ppid=3054 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:12.298261 kernel: audit: type=1325 audit(1765851012.291:521): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:12.298344 kernel: audit: type=1300 audit(1765851012.291:521): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc93bcc50 a2=0 a3=1 items=0 ppid=3054 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:12.299913 kernel: audit: type=1327 audit(1765851012.291:521): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:12.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:12.299000 audit[3324]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:12.302524 kernel: audit: type=1325 audit(1765851012.299:522): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:12.299000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc93bcc50 a2=0 a3=1 items=0 ppid=3054 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:12.299000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:12.307923 kernel: audit: type=1300 audit(1765851012.299:522): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc93bcc50 a2=0 a3=1 items=0 ppid=3054 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:13.316000 audit[3326]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:13.316000 audit[3326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdec6dad0 a2=0 a3=1 items=0 ppid=3054 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:13.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:13.330000 audit[3326]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:13.330000 audit[3326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdec6dad0 a2=0 a3=1 items=0 ppid=3054 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:13.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:15.810697 
kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 02:10:15.810818 kernel: audit: type=1325 audit(1765851015.806:525): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:15.806000 audit[3328]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:15.806000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff0acea80 a2=0 a3=1 items=0 ppid=3054 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:15.815106 kernel: audit: type=1300 audit(1765851015.806:525): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff0acea80 a2=0 a3=1 items=0 ppid=3054 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:15.806000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:15.817126 kernel: audit: type=1327 audit(1765851015.806:525): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:15.815000 audit[3328]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:15.818975 kernel: audit: type=1325 audit(1765851015.815:526): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:15.815000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0acea80 a2=0 a3=1 items=0 ppid=3054 pid=3328 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:15.824504 kernel: audit: type=1300 audit(1765851015.815:526): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0acea80 a2=0 a3=1 items=0 ppid=3054 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:15.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:15.826172 kernel: audit: type=1327 audit(1765851015.815:526): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:16.839000 audit[3330]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:16.839000 audit[3330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc4fbea60 a2=0 a3=1 items=0 ppid=3054 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:16.842893 kernel: audit: type=1325 audit(1765851016.839:527): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:16.839000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:16.848207 kernel: audit: type=1300 audit(1765851016.839:527): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc4fbea60 a2=0 a3=1 items=0 ppid=3054 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:16.848273 kernel: audit: type=1327 audit(1765851016.839:527): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:16.842000 audit[3330]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:16.850295 kernel: audit: type=1325 audit(1765851016.842:528): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:16.842000 audit[3330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4fbea60 a2=0 a3=1 items=0 ppid=3054 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:16.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:18.264000 audit[3332]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:18.264000 audit[3332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff60e7ca0 a2=0 a3=1 items=0 ppid=3054 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:18.271000 audit[3332]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:18.271000 
audit[3332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff60e7ca0 a2=0 a3=1 items=0 ppid=3054 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.271000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:18.306625 systemd[1]: Created slice kubepods-besteffort-podef3409b6_7c65_46c6_94d7_8d6868e41b57.slice - libcontainer container kubepods-besteffort-podef3409b6_7c65_46c6_94d7_8d6868e41b57.slice. Dec 16 02:10:18.326617 kubelet[2916]: I1216 02:10:18.326573 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef3409b6-7c65-46c6-94d7-8d6868e41b57-tigera-ca-bundle\") pod \"calico-typha-6bf6bc5cfd-64dnq\" (UID: \"ef3409b6-7c65-46c6-94d7-8d6868e41b57\") " pod="calico-system/calico-typha-6bf6bc5cfd-64dnq" Dec 16 02:10:18.327162 kubelet[2916]: I1216 02:10:18.326711 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef3409b6-7c65-46c6-94d7-8d6868e41b57-typha-certs\") pod \"calico-typha-6bf6bc5cfd-64dnq\" (UID: \"ef3409b6-7c65-46c6-94d7-8d6868e41b57\") " pod="calico-system/calico-typha-6bf6bc5cfd-64dnq" Dec 16 02:10:18.327162 kubelet[2916]: I1216 02:10:18.326736 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhbh\" (UniqueName: \"kubernetes.io/projected/ef3409b6-7c65-46c6-94d7-8d6868e41b57-kube-api-access-jnhbh\") pod \"calico-typha-6bf6bc5cfd-64dnq\" (UID: \"ef3409b6-7c65-46c6-94d7-8d6868e41b57\") " pod="calico-system/calico-typha-6bf6bc5cfd-64dnq" Dec 16 02:10:18.502515 systemd[1]: Created slice 
kubepods-besteffort-pod2911bdd9_aa5f_49b0_b998_8e1643c2bbdf.slice - libcontainer container kubepods-besteffort-pod2911bdd9_aa5f_49b0_b998_8e1643c2bbdf.slice. Dec 16 02:10:18.528130 kubelet[2916]: I1216 02:10:18.527933 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km9n9\" (UniqueName: \"kubernetes.io/projected/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-kube-api-access-km9n9\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528130 kubelet[2916]: I1216 02:10:18.528038 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-cni-bin-dir\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528130 kubelet[2916]: I1216 02:10:18.528089 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-flexvol-driver-host\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528484 kubelet[2916]: I1216 02:10:18.528142 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-lib-modules\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528484 kubelet[2916]: I1216 02:10:18.528194 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-policysync\") pod 
\"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528484 kubelet[2916]: I1216 02:10:18.528209 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-tigera-ca-bundle\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528484 kubelet[2916]: I1216 02:10:18.528348 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-var-run-calico\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528484 kubelet[2916]: I1216 02:10:18.528386 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-cni-net-dir\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528683 kubelet[2916]: I1216 02:10:18.528412 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-node-certs\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528683 kubelet[2916]: I1216 02:10:18.528466 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-var-lib-calico\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " 
pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528683 kubelet[2916]: I1216 02:10:18.528482 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-xtables-lock\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.528683 kubelet[2916]: I1216 02:10:18.528498 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2911bdd9-aa5f-49b0-b998-8e1643c2bbdf-cni-log-dir\") pod \"calico-node-bwpl5\" (UID: \"2911bdd9-aa5f-49b0-b998-8e1643c2bbdf\") " pod="calico-system/calico-node-bwpl5" Dec 16 02:10:18.614647 containerd[1662]: time="2025-12-16T02:10:18.614609541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf6bc5cfd-64dnq,Uid:ef3409b6-7c65-46c6-94d7-8d6868e41b57,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:18.631047 kubelet[2916]: E1216 02:10:18.630917 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.631047 kubelet[2916]: W1216 02:10:18.630939 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.631494 kubelet[2916]: E1216 02:10:18.630959 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.637376 kubelet[2916]: E1216 02:10:18.637222 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.637376 kubelet[2916]: W1216 02:10:18.637375 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.637500 kubelet[2916]: E1216 02:10:18.637397 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.642831 kubelet[2916]: E1216 02:10:18.642691 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.642831 kubelet[2916]: W1216 02:10:18.642718 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.642831 kubelet[2916]: E1216 02:10:18.642742 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.649647 containerd[1662]: time="2025-12-16T02:10:18.649605846Z" level=info msg="connecting to shim 6295e2c5d831bf7966e0671e99448d1e424480a5924673f0c0c5f8b1e0378903" address="unix:///run/containerd/s/6cb55630ee750f79ab33e1495671803a759937bbac4e85b4f990585a2892dc5b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:18.671193 systemd[1]: Started cri-containerd-6295e2c5d831bf7966e0671e99448d1e424480a5924673f0c0c5f8b1e0378903.scope - libcontainer container 6295e2c5d831bf7966e0671e99448d1e424480a5924673f0c0c5f8b1e0378903. 
Dec 16 02:10:18.680000 audit: BPF prog-id=151 op=LOAD Dec 16 02:10:18.680000 audit: BPF prog-id=152 op=LOAD Dec 16 02:10:18.680000 audit[3358]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3348 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393565326335643833316266373936366530363731653939343438 Dec 16 02:10:18.680000 audit: BPF prog-id=152 op=UNLOAD Dec 16 02:10:18.680000 audit[3358]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393565326335643833316266373936366530363731653939343438 Dec 16 02:10:18.680000 audit: BPF prog-id=153 op=LOAD Dec 16 02:10:18.680000 audit[3358]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3348 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.680000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393565326335643833316266373936366530363731653939343438 Dec 16 02:10:18.680000 audit: BPF prog-id=154 op=LOAD Dec 16 02:10:18.680000 audit[3358]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3348 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393565326335643833316266373936366530363731653939343438 Dec 16 02:10:18.680000 audit: BPF prog-id=154 op=UNLOAD Dec 16 02:10:18.680000 audit[3358]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393565326335643833316266373936366530363731653939343438 Dec 16 02:10:18.680000 audit: BPF prog-id=153 op=UNLOAD Dec 16 02:10:18.680000 audit[3358]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 02:10:18.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393565326335643833316266373936366530363731653939343438 Dec 16 02:10:18.680000 audit: BPF prog-id=155 op=LOAD Dec 16 02:10:18.680000 audit[3358]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3348 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632393565326335643833316266373936366530363731653939343438 Dec 16 02:10:18.704478 containerd[1662]: time="2025-12-16T02:10:18.704435771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf6bc5cfd-64dnq,Uid:ef3409b6-7c65-46c6-94d7-8d6868e41b57,Namespace:calico-system,Attempt:0,} returns sandbox id \"6295e2c5d831bf7966e0671e99448d1e424480a5924673f0c0c5f8b1e0378903\"" Dec 16 02:10:18.706010 containerd[1662]: time="2025-12-16T02:10:18.705973055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 02:10:18.780511 kubelet[2916]: E1216 02:10:18.780376 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:10:18.808353 containerd[1662]: time="2025-12-16T02:10:18.808297562Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-bwpl5,Uid:2911bdd9-aa5f-49b0-b998-8e1643c2bbdf,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:18.811871 kubelet[2916]: E1216 02:10:18.811843 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.811943 kubelet[2916]: W1216 02:10:18.811882 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.811943 kubelet[2916]: E1216 02:10:18.811903 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.812183 kubelet[2916]: E1216 02:10:18.812160 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.812253 kubelet[2916]: W1216 02:10:18.812187 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.812280 kubelet[2916]: E1216 02:10:18.812253 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.812435 kubelet[2916]: E1216 02:10:18.812424 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.812435 kubelet[2916]: W1216 02:10:18.812434 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.812489 kubelet[2916]: E1216 02:10:18.812448 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.812611 kubelet[2916]: E1216 02:10:18.812572 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.812653 kubelet[2916]: W1216 02:10:18.812611 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.812653 kubelet[2916]: E1216 02:10:18.812632 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.812794 kubelet[2916]: E1216 02:10:18.812779 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.812836 kubelet[2916]: W1216 02:10:18.812798 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.812836 kubelet[2916]: E1216 02:10:18.812809 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.812971 kubelet[2916]: E1216 02:10:18.812959 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.812971 kubelet[2916]: W1216 02:10:18.812970 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.813031 kubelet[2916]: E1216 02:10:18.812978 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.813108 kubelet[2916]: E1216 02:10:18.813096 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.813108 kubelet[2916]: W1216 02:10:18.813105 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.813165 kubelet[2916]: E1216 02:10:18.813113 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.813244 kubelet[2916]: E1216 02:10:18.813230 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.813244 kubelet[2916]: W1216 02:10:18.813242 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.813285 kubelet[2916]: E1216 02:10:18.813250 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.813397 kubelet[2916]: E1216 02:10:18.813384 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.813397 kubelet[2916]: W1216 02:10:18.813394 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.813446 kubelet[2916]: E1216 02:10:18.813402 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.813546 kubelet[2916]: E1216 02:10:18.813534 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.813546 kubelet[2916]: W1216 02:10:18.813544 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.813600 kubelet[2916]: E1216 02:10:18.813551 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.813675 kubelet[2916]: E1216 02:10:18.813664 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.813675 kubelet[2916]: W1216 02:10:18.813674 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.813730 kubelet[2916]: E1216 02:10:18.813681 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.813824 kubelet[2916]: E1216 02:10:18.813812 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.813824 kubelet[2916]: W1216 02:10:18.813822 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.813824 kubelet[2916]: E1216 02:10:18.813829 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.813989 kubelet[2916]: E1216 02:10:18.813977 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.813989 kubelet[2916]: W1216 02:10:18.813987 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.814040 kubelet[2916]: E1216 02:10:18.813996 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.814159 kubelet[2916]: E1216 02:10:18.814148 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.814188 kubelet[2916]: W1216 02:10:18.814159 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.814188 kubelet[2916]: E1216 02:10:18.814167 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.814306 kubelet[2916]: E1216 02:10:18.814294 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.814306 kubelet[2916]: W1216 02:10:18.814304 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.814306 kubelet[2916]: E1216 02:10:18.814312 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.814441 kubelet[2916]: E1216 02:10:18.814429 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.814474 kubelet[2916]: W1216 02:10:18.814441 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.814474 kubelet[2916]: E1216 02:10:18.814449 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.814600 kubelet[2916]: E1216 02:10:18.814583 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.814600 kubelet[2916]: W1216 02:10:18.814593 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.814646 kubelet[2916]: E1216 02:10:18.814602 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.814728 kubelet[2916]: E1216 02:10:18.814718 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.814728 kubelet[2916]: W1216 02:10:18.814728 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.814785 kubelet[2916]: E1216 02:10:18.814735 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.814898 kubelet[2916]: E1216 02:10:18.814846 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.814898 kubelet[2916]: W1216 02:10:18.814853 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.814898 kubelet[2916]: E1216 02:10:18.814875 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.815018 kubelet[2916]: E1216 02:10:18.815006 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.815018 kubelet[2916]: W1216 02:10:18.815016 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.815073 kubelet[2916]: E1216 02:10:18.815023 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.828970 containerd[1662]: time="2025-12-16T02:10:18.828924664Z" level=info msg="connecting to shim a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17" address="unix:///run/containerd/s/fc0363d8f6bc8b2ab671e1a3f507dfabdba26b650402689ed0358dd49aa59028" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:18.830446 kubelet[2916]: E1216 02:10:18.830421 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.830446 kubelet[2916]: W1216 02:10:18.830444 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.830562 kubelet[2916]: E1216 02:10:18.830464 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.830562 kubelet[2916]: I1216 02:10:18.830490 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrf6\" (UniqueName: \"kubernetes.io/projected/451fe39a-cf01-4312-91ac-f3d2c7b640b1-kube-api-access-8lrf6\") pod \"csi-node-driver-sgqkd\" (UID: \"451fe39a-cf01-4312-91ac-f3d2c7b640b1\") " pod="calico-system/csi-node-driver-sgqkd" Dec 16 02:10:18.830729 kubelet[2916]: E1216 02:10:18.830716 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.830774 kubelet[2916]: W1216 02:10:18.830729 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.830774 kubelet[2916]: E1216 02:10:18.830739 2916 plugins.go:697] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.830774 kubelet[2916]: I1216 02:10:18.830758 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/451fe39a-cf01-4312-91ac-f3d2c7b640b1-kubelet-dir\") pod \"csi-node-driver-sgqkd\" (UID: \"451fe39a-cf01-4312-91ac-f3d2c7b640b1\") " pod="calico-system/csi-node-driver-sgqkd" Dec 16 02:10:18.832365 kubelet[2916]: E1216 02:10:18.832344 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.832365 kubelet[2916]: W1216 02:10:18.832363 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.832462 kubelet[2916]: E1216 02:10:18.832377 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.832585 kubelet[2916]: E1216 02:10:18.832570 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.832585 kubelet[2916]: W1216 02:10:18.832584 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.832647 kubelet[2916]: E1216 02:10:18.832594 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.832831 kubelet[2916]: E1216 02:10:18.832811 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.832831 kubelet[2916]: W1216 02:10:18.832825 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.832922 kubelet[2916]: E1216 02:10:18.832837 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.833192 kubelet[2916]: I1216 02:10:18.833043 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/451fe39a-cf01-4312-91ac-f3d2c7b640b1-varrun\") pod \"csi-node-driver-sgqkd\" (UID: \"451fe39a-cf01-4312-91ac-f3d2c7b640b1\") " pod="calico-system/csi-node-driver-sgqkd" Dec 16 02:10:18.833264 kubelet[2916]: E1216 02:10:18.833245 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.833264 kubelet[2916]: W1216 02:10:18.833259 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.833350 kubelet[2916]: E1216 02:10:18.833270 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.833508 kubelet[2916]: E1216 02:10:18.833492 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.833508 kubelet[2916]: W1216 02:10:18.833506 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.833578 kubelet[2916]: E1216 02:10:18.833518 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.834947 kubelet[2916]: E1216 02:10:18.834924 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.834947 kubelet[2916]: W1216 02:10:18.834946 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.835031 kubelet[2916]: E1216 02:10:18.834961 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.835031 kubelet[2916]: I1216 02:10:18.834992 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/451fe39a-cf01-4312-91ac-f3d2c7b640b1-socket-dir\") pod \"csi-node-driver-sgqkd\" (UID: \"451fe39a-cf01-4312-91ac-f3d2c7b640b1\") " pod="calico-system/csi-node-driver-sgqkd" Dec 16 02:10:18.835234 kubelet[2916]: E1216 02:10:18.835212 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.835234 kubelet[2916]: W1216 02:10:18.835233 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.835291 kubelet[2916]: E1216 02:10:18.835246 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.835417 kubelet[2916]: E1216 02:10:18.835402 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.835454 kubelet[2916]: W1216 02:10:18.835418 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.835454 kubelet[2916]: E1216 02:10:18.835431 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.835678 kubelet[2916]: E1216 02:10:18.835663 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.835678 kubelet[2916]: W1216 02:10:18.835677 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.835730 kubelet[2916]: E1216 02:10:18.835687 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.835730 kubelet[2916]: I1216 02:10:18.835709 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/451fe39a-cf01-4312-91ac-f3d2c7b640b1-registration-dir\") pod \"csi-node-driver-sgqkd\" (UID: \"451fe39a-cf01-4312-91ac-f3d2c7b640b1\") " pod="calico-system/csi-node-driver-sgqkd" Dec 16 02:10:18.835968 kubelet[2916]: E1216 02:10:18.835942 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.835968 kubelet[2916]: W1216 02:10:18.835962 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.836045 kubelet[2916]: E1216 02:10:18.835976 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.836204 kubelet[2916]: E1216 02:10:18.836184 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.836204 kubelet[2916]: W1216 02:10:18.836201 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.836245 kubelet[2916]: E1216 02:10:18.836211 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.836479 kubelet[2916]: E1216 02:10:18.836462 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.836479 kubelet[2916]: W1216 02:10:18.836478 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.836533 kubelet[2916]: E1216 02:10:18.836490 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.836693 kubelet[2916]: E1216 02:10:18.836676 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.836693 kubelet[2916]: W1216 02:10:18.836692 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.836751 kubelet[2916]: E1216 02:10:18.836703 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.856303 systemd[1]: Started cri-containerd-a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17.scope - libcontainer container a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17. Dec 16 02:10:18.865000 audit: BPF prog-id=156 op=LOAD Dec 16 02:10:18.866000 audit: BPF prog-id=157 op=LOAD Dec 16 02:10:18.866000 audit[3445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3424 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464343330303231323031396636386465383232313139356161 Dec 16 02:10:18.866000 audit: BPF prog-id=157 op=UNLOAD Dec 16 02:10:18.866000 audit[3445]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464343330303231323031396636386465383232313139356161 Dec 16 02:10:18.866000 audit: BPF prog-id=158 op=LOAD Dec 16 02:10:18.866000 audit[3445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3424 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464343330303231323031396636386465383232313139356161 Dec 16 02:10:18.866000 audit: BPF prog-id=159 op=LOAD Dec 16 02:10:18.866000 audit[3445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3424 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464343330303231323031396636386465383232313139356161 Dec 16 02:10:18.866000 audit: BPF prog-id=159 op=UNLOAD Dec 16 02:10:18.866000 audit[3445]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464343330303231323031396636386465383232313139356161 Dec 16 02:10:18.866000 audit: BPF prog-id=158 op=UNLOAD Dec 16 02:10:18.866000 audit[3445]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464343330303231323031396636386465383232313139356161 Dec 16 02:10:18.866000 audit: BPF prog-id=160 op=LOAD Dec 16 02:10:18.866000 audit[3445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3424 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464343330303231323031396636386465383232313139356161 Dec 16 02:10:18.883644 containerd[1662]: time="2025-12-16T02:10:18.883603668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bwpl5,Uid:2911bdd9-aa5f-49b0-b998-8e1643c2bbdf,Namespace:calico-system,Attempt:0,} returns 
sandbox id \"a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17\"" Dec 16 02:10:18.936532 kubelet[2916]: E1216 02:10:18.936488 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.936532 kubelet[2916]: W1216 02:10:18.936512 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.936532 kubelet[2916]: E1216 02:10:18.936531 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.936779 kubelet[2916]: E1216 02:10:18.936745 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.936779 kubelet[2916]: W1216 02:10:18.936757 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.936779 kubelet[2916]: E1216 02:10:18.936767 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.936968 kubelet[2916]: E1216 02:10:18.936948 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.936968 kubelet[2916]: W1216 02:10:18.936956 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.936968 kubelet[2916]: E1216 02:10:18.936964 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.937634 kubelet[2916]: E1216 02:10:18.937395 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.937634 kubelet[2916]: W1216 02:10:18.937411 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.937634 kubelet[2916]: E1216 02:10:18.937426 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.938022 kubelet[2916]: E1216 02:10:18.937948 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.938114 kubelet[2916]: W1216 02:10:18.938101 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.938265 kubelet[2916]: E1216 02:10:18.938158 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.938879 kubelet[2916]: E1216 02:10:18.938826 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.939055 kubelet[2916]: W1216 02:10:18.938951 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.939055 kubelet[2916]: E1216 02:10:18.938970 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.939482 kubelet[2916]: E1216 02:10:18.939453 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.939645 kubelet[2916]: W1216 02:10:18.939540 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.939645 kubelet[2916]: E1216 02:10:18.939566 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.940068 kubelet[2916]: E1216 02:10:18.940052 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.940224 kubelet[2916]: W1216 02:10:18.940135 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.940224 kubelet[2916]: E1216 02:10:18.940151 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.940489 kubelet[2916]: E1216 02:10:18.940476 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.940644 kubelet[2916]: W1216 02:10:18.940544 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.940644 kubelet[2916]: E1216 02:10:18.940562 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.940905 kubelet[2916]: E1216 02:10:18.940891 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.941062 kubelet[2916]: W1216 02:10:18.940966 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.941062 kubelet[2916]: E1216 02:10:18.940981 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.941340 kubelet[2916]: E1216 02:10:18.941326 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.941428 kubelet[2916]: W1216 02:10:18.941415 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.941428 kubelet[2916]: E1216 02:10:18.941455 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.941744 kubelet[2916]: E1216 02:10:18.941730 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.941829 kubelet[2916]: W1216 02:10:18.941817 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.941915 kubelet[2916]: E1216 02:10:18.941903 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.942380 kubelet[2916]: E1216 02:10:18.942229 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.942380 kubelet[2916]: W1216 02:10:18.942246 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.942380 kubelet[2916]: E1216 02:10:18.942257 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.942720 kubelet[2916]: E1216 02:10:18.942704 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.942918 kubelet[2916]: W1216 02:10:18.942897 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.943330 kubelet[2916]: E1216 02:10:18.943092 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.944314 kubelet[2916]: E1216 02:10:18.944221 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.944314 kubelet[2916]: W1216 02:10:18.944242 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.944314 kubelet[2916]: E1216 02:10:18.944258 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.944901 kubelet[2916]: E1216 02:10:18.944644 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.944901 kubelet[2916]: W1216 02:10:18.944662 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.944901 kubelet[2916]: E1216 02:10:18.944674 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.945548 kubelet[2916]: E1216 02:10:18.945502 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.945548 kubelet[2916]: W1216 02:10:18.945519 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.945548 kubelet[2916]: E1216 02:10:18.945533 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.945742 kubelet[2916]: E1216 02:10:18.945725 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.945742 kubelet[2916]: W1216 02:10:18.945737 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.945956 kubelet[2916]: E1216 02:10:18.945747 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.946073 kubelet[2916]: E1216 02:10:18.946053 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.946073 kubelet[2916]: W1216 02:10:18.946069 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.946123 kubelet[2916]: E1216 02:10:18.946081 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.946392 kubelet[2916]: E1216 02:10:18.946340 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.946392 kubelet[2916]: W1216 02:10:18.946389 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.946771 kubelet[2916]: E1216 02:10:18.946401 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.946771 kubelet[2916]: E1216 02:10:18.946623 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.946771 kubelet[2916]: W1216 02:10:18.946633 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.946771 kubelet[2916]: E1216 02:10:18.946642 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.946854 kubelet[2916]: E1216 02:10:18.946818 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.946854 kubelet[2916]: W1216 02:10:18.946826 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.946854 kubelet[2916]: E1216 02:10:18.946835 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.948123 kubelet[2916]: E1216 02:10:18.948087 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.948123 kubelet[2916]: W1216 02:10:18.948106 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.948123 kubelet[2916]: E1216 02:10:18.948119 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.948435 kubelet[2916]: E1216 02:10:18.948391 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.948435 kubelet[2916]: W1216 02:10:18.948404 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.948435 kubelet[2916]: E1216 02:10:18.948414 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:18.948634 kubelet[2916]: E1216 02:10:18.948607 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.948661 kubelet[2916]: W1216 02:10:18.948633 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.948661 kubelet[2916]: E1216 02:10:18.948645 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:18.952406 kubelet[2916]: E1216 02:10:18.952371 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:18.952406 kubelet[2916]: W1216 02:10:18.952388 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:18.952406 kubelet[2916]: E1216 02:10:18.952401 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:19.282000 audit[3505]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:19.282000 audit[3505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff5235b00 a2=0 a3=1 items=0 ppid=3054 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:19.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:19.294000 audit[3505]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:19.294000 audit[3505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5235b00 a2=0 a3=1 items=0 ppid=3054 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:19.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:20.143303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242709691.mount: Deactivated successfully. 
Dec 16 02:10:20.422199 kubelet[2916]: E1216 02:10:20.421646 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:10:21.054956 containerd[1662]: time="2025-12-16T02:10:21.054895466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:21.056253 containerd[1662]: time="2025-12-16T02:10:21.055997549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 02:10:21.057242 containerd[1662]: time="2025-12-16T02:10:21.057202473Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:21.059829 containerd[1662]: time="2025-12-16T02:10:21.059768800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:21.060392 containerd[1662]: time="2025-12-16T02:10:21.060368002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.354360827s" Dec 16 02:10:21.060435 containerd[1662]: time="2025-12-16T02:10:21.060396722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 02:10:21.061400 containerd[1662]: time="2025-12-16T02:10:21.061361405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 02:10:21.070333 containerd[1662]: time="2025-12-16T02:10:21.070292712Z" level=info msg="CreateContainer within sandbox \"6295e2c5d831bf7966e0671e99448d1e424480a5924673f0c0c5f8b1e0378903\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 02:10:21.079259 containerd[1662]: time="2025-12-16T02:10:21.079213459Z" level=info msg="Container 4a6bf63fac333388d731ac2eb5b0c7dff5b2aabb73c25e3dc16b593e15d3938b: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:21.088962 containerd[1662]: time="2025-12-16T02:10:21.088923008Z" level=info msg="CreateContainer within sandbox \"6295e2c5d831bf7966e0671e99448d1e424480a5924673f0c0c5f8b1e0378903\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a6bf63fac333388d731ac2eb5b0c7dff5b2aabb73c25e3dc16b593e15d3938b\"" Dec 16 02:10:21.089723 containerd[1662]: time="2025-12-16T02:10:21.089681330Z" level=info msg="StartContainer for \"4a6bf63fac333388d731ac2eb5b0c7dff5b2aabb73c25e3dc16b593e15d3938b\"" Dec 16 02:10:21.091033 containerd[1662]: time="2025-12-16T02:10:21.090996014Z" level=info msg="connecting to shim 4a6bf63fac333388d731ac2eb5b0c7dff5b2aabb73c25e3dc16b593e15d3938b" address="unix:///run/containerd/s/6cb55630ee750f79ab33e1495671803a759937bbac4e85b4f990585a2892dc5b" protocol=ttrpc version=3 Dec 16 02:10:21.114320 systemd[1]: Started cri-containerd-4a6bf63fac333388d731ac2eb5b0c7dff5b2aabb73c25e3dc16b593e15d3938b.scope - libcontainer container 4a6bf63fac333388d731ac2eb5b0c7dff5b2aabb73c25e3dc16b593e15d3938b. 
Dec 16 02:10:21.125000 audit: BPF prog-id=161 op=LOAD Dec 16 02:10:21.127060 kernel: kauditd_printk_skb: 58 callbacks suppressed Dec 16 02:10:21.127117 kernel: audit: type=1334 audit(1765851021.125:549): prog-id=161 op=LOAD Dec 16 02:10:21.125000 audit: BPF prog-id=162 op=LOAD Dec 16 02:10:21.128520 kernel: audit: type=1334 audit(1765851021.125:550): prog-id=162 op=LOAD Dec 16 02:10:21.125000 audit[3516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.131767 kernel: audit: type=1300 audit(1765851021.125:550): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.131836 kernel: audit: type=1327 audit(1765851021.125:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.126000 audit: BPF prog-id=162 op=UNLOAD Dec 16 02:10:21.126000 audit[3516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.138752 kernel: audit: type=1334 audit(1765851021.126:551): prog-id=162 op=UNLOAD Dec 16 02:10:21.138815 kernel: audit: type=1300 audit(1765851021.126:551): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.138834 kernel: audit: type=1327 audit(1765851021.126:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.141825 kernel: audit: type=1334 audit(1765851021.126:552): prog-id=163 op=LOAD Dec 16 02:10:21.126000 audit: BPF prog-id=163 op=LOAD Dec 16 02:10:21.126000 audit[3516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.145971 kernel: audit: type=1300 audit(1765851021.126:552): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:10:21.146090 kernel: audit: type=1327 audit(1765851021.126:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.126000 audit: BPF prog-id=164 op=LOAD Dec 16 02:10:21.126000 audit[3516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.127000 audit: BPF prog-id=164 op=UNLOAD Dec 16 02:10:21.127000 audit[3516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.127000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.127000 audit: BPF prog-id=163 op=UNLOAD Dec 16 02:10:21.127000 audit[3516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.127000 audit: BPF prog-id=165 op=LOAD Dec 16 02:10:21.127000 audit[3516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3348 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461366266363366616333333333383864373331616332656235623063 Dec 16 02:10:21.173293 containerd[1662]: time="2025-12-16T02:10:21.173241301Z" level=info msg="StartContainer for \"4a6bf63fac333388d731ac2eb5b0c7dff5b2aabb73c25e3dc16b593e15d3938b\" returns successfully" Dec 16 02:10:21.494952 kubelet[2916]: I1216 02:10:21.494845 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-6bf6bc5cfd-64dnq" podStartSLOduration=1.139148195 podStartE2EDuration="3.494819586s" podCreationTimestamp="2025-12-16 02:10:18 +0000 UTC" firstStartedPulling="2025-12-16 02:10:18.705570334 +0000 UTC m=+20.373605476" lastFinishedPulling="2025-12-16 02:10:21.061241725 +0000 UTC m=+22.729276867" observedRunningTime="2025-12-16 02:10:21.494602866 +0000 UTC m=+23.162638008" watchObservedRunningTime="2025-12-16 02:10:21.494819586 +0000 UTC m=+23.162854728" Dec 16 02:10:21.530315 kubelet[2916]: E1216 02:10:21.530271 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.530315 kubelet[2916]: W1216 02:10:21.530317 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.530502 kubelet[2916]: E1216 02:10:21.530357 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.530661 kubelet[2916]: E1216 02:10:21.530649 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.530706 kubelet[2916]: W1216 02:10:21.530660 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.530732 kubelet[2916]: E1216 02:10:21.530707 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.530871 kubelet[2916]: E1216 02:10:21.530848 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.530899 kubelet[2916]: W1216 02:10:21.530871 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.530899 kubelet[2916]: E1216 02:10:21.530882 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.531029 kubelet[2916]: E1216 02:10:21.531018 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.531029 kubelet[2916]: W1216 02:10:21.531028 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.531087 kubelet[2916]: E1216 02:10:21.531036 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.531248 kubelet[2916]: E1216 02:10:21.531236 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.531248 kubelet[2916]: W1216 02:10:21.531248 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.531330 kubelet[2916]: E1216 02:10:21.531257 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.531403 kubelet[2916]: E1216 02:10:21.531392 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.531432 kubelet[2916]: W1216 02:10:21.531403 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.531432 kubelet[2916]: E1216 02:10:21.531411 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.531539 kubelet[2916]: E1216 02:10:21.531529 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.531570 kubelet[2916]: W1216 02:10:21.531541 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.531570 kubelet[2916]: E1216 02:10:21.531549 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.531677 kubelet[2916]: E1216 02:10:21.531666 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.531677 kubelet[2916]: W1216 02:10:21.531676 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.531730 kubelet[2916]: E1216 02:10:21.531683 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.531823 kubelet[2916]: E1216 02:10:21.531812 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.531823 kubelet[2916]: W1216 02:10:21.531822 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.531904 kubelet[2916]: E1216 02:10:21.531829 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.531970 kubelet[2916]: E1216 02:10:21.531959 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.531970 kubelet[2916]: W1216 02:10:21.531969 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.532038 kubelet[2916]: E1216 02:10:21.531977 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.532107 kubelet[2916]: E1216 02:10:21.532092 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.532107 kubelet[2916]: W1216 02:10:21.532102 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.532173 kubelet[2916]: E1216 02:10:21.532109 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.532258 kubelet[2916]: E1216 02:10:21.532248 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.532258 kubelet[2916]: W1216 02:10:21.532258 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.532324 kubelet[2916]: E1216 02:10:21.532266 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.532400 kubelet[2916]: E1216 02:10:21.532389 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.532400 kubelet[2916]: W1216 02:10:21.532399 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.532467 kubelet[2916]: E1216 02:10:21.532406 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.532537 kubelet[2916]: E1216 02:10:21.532526 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.532537 kubelet[2916]: W1216 02:10:21.532536 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.532589 kubelet[2916]: E1216 02:10:21.532543 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.532699 kubelet[2916]: E1216 02:10:21.532689 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.532738 kubelet[2916]: W1216 02:10:21.532700 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.532738 kubelet[2916]: E1216 02:10:21.532708 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.559306 kubelet[2916]: E1216 02:10:21.559170 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.559306 kubelet[2916]: W1216 02:10:21.559189 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.559306 kubelet[2916]: E1216 02:10:21.559204 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.559536 kubelet[2916]: E1216 02:10:21.559524 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.559601 kubelet[2916]: W1216 02:10:21.559590 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.559648 kubelet[2916]: E1216 02:10:21.559639 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.559964 kubelet[2916]: E1216 02:10:21.559899 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.559964 kubelet[2916]: W1216 02:10:21.559910 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.559964 kubelet[2916]: E1216 02:10:21.559921 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.560181 kubelet[2916]: E1216 02:10:21.560152 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.560181 kubelet[2916]: W1216 02:10:21.560171 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.560233 kubelet[2916]: E1216 02:10:21.560184 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.560342 kubelet[2916]: E1216 02:10:21.560328 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.560342 kubelet[2916]: W1216 02:10:21.560338 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.560388 kubelet[2916]: E1216 02:10:21.560347 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.560494 kubelet[2916]: E1216 02:10:21.560483 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.560523 kubelet[2916]: W1216 02:10:21.560494 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.560523 kubelet[2916]: E1216 02:10:21.560502 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.560683 kubelet[2916]: E1216 02:10:21.560671 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.560683 kubelet[2916]: W1216 02:10:21.560681 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.560726 kubelet[2916]: E1216 02:10:21.560690 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.560961 kubelet[2916]: E1216 02:10:21.560941 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.561045 kubelet[2916]: W1216 02:10:21.560960 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.561045 kubelet[2916]: E1216 02:10:21.560973 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.561165 kubelet[2916]: E1216 02:10:21.561148 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.561165 kubelet[2916]: W1216 02:10:21.561162 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.561226 kubelet[2916]: E1216 02:10:21.561173 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.561339 kubelet[2916]: E1216 02:10:21.561327 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.561339 kubelet[2916]: W1216 02:10:21.561338 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.561401 kubelet[2916]: E1216 02:10:21.561346 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.561479 kubelet[2916]: E1216 02:10:21.561469 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.561479 kubelet[2916]: W1216 02:10:21.561478 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.561557 kubelet[2916]: E1216 02:10:21.561486 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.561645 kubelet[2916]: E1216 02:10:21.561632 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.561645 kubelet[2916]: W1216 02:10:21.561645 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.561706 kubelet[2916]: E1216 02:10:21.561654 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.561810 kubelet[2916]: E1216 02:10:21.561798 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.561810 kubelet[2916]: W1216 02:10:21.561808 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.561894 kubelet[2916]: E1216 02:10:21.561816 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.561965 kubelet[2916]: E1216 02:10:21.561953 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.561965 kubelet[2916]: W1216 02:10:21.561963 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.562010 kubelet[2916]: E1216 02:10:21.561971 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.562148 kubelet[2916]: E1216 02:10:21.562136 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.562148 kubelet[2916]: W1216 02:10:21.562145 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.562199 kubelet[2916]: E1216 02:10:21.562153 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.562465 kubelet[2916]: E1216 02:10:21.562448 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.562505 kubelet[2916]: W1216 02:10:21.562475 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.562505 kubelet[2916]: E1216 02:10:21.562488 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:21.562802 kubelet[2916]: E1216 02:10:21.562784 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.562802 kubelet[2916]: W1216 02:10:21.562800 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.562892 kubelet[2916]: E1216 02:10:21.562812 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:21.563278 kubelet[2916]: E1216 02:10:21.563260 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:21.563314 kubelet[2916]: W1216 02:10:21.563277 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:21.563314 kubelet[2916]: E1216 02:10:21.563290 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.421834 kubelet[2916]: E1216 02:10:22.421770 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:10:22.486888 kubelet[2916]: I1216 02:10:22.486523 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 02:10:22.525165 containerd[1662]: time="2025-12-16T02:10:22.525084959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:22.526623 containerd[1662]: time="2025-12-16T02:10:22.526307123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:22.528343 containerd[1662]: time="2025-12-16T02:10:22.528281048Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:22.530606 containerd[1662]: time="2025-12-16T02:10:22.530573335Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:22.531279 containerd[1662]: time="2025-12-16T02:10:22.531237817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.469835452s" Dec 16 02:10:22.531279 containerd[1662]: time="2025-12-16T02:10:22.531278017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 02:10:22.537043 containerd[1662]: time="2025-12-16T02:10:22.537013795Z" level=info msg="CreateContainer within sandbox \"a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 02:10:22.539714 kubelet[2916]: E1216 02:10:22.539655 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.539714 kubelet[2916]: W1216 02:10:22.539698 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.540098 kubelet[2916]: E1216 02:10:22.539730 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.540098 kubelet[2916]: E1216 02:10:22.539879 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.540098 kubelet[2916]: W1216 02:10:22.539894 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.540098 kubelet[2916]: E1216 02:10:22.539902 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.540098 kubelet[2916]: E1216 02:10:22.540049 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.540098 kubelet[2916]: W1216 02:10:22.540057 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.540098 kubelet[2916]: E1216 02:10:22.540066 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.540315 kubelet[2916]: E1216 02:10:22.540231 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.540315 kubelet[2916]: W1216 02:10:22.540238 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.540315 kubelet[2916]: E1216 02:10:22.540253 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.540443 kubelet[2916]: E1216 02:10:22.540418 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.540443 kubelet[2916]: W1216 02:10:22.540442 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.540500 kubelet[2916]: E1216 02:10:22.540452 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.540606 kubelet[2916]: E1216 02:10:22.540578 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.540606 kubelet[2916]: W1216 02:10:22.540603 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.540707 kubelet[2916]: E1216 02:10:22.540612 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.540795 kubelet[2916]: E1216 02:10:22.540779 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.540795 kubelet[2916]: W1216 02:10:22.540792 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.540898 kubelet[2916]: E1216 02:10:22.540801 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.541036 kubelet[2916]: E1216 02:10:22.540949 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.541036 kubelet[2916]: W1216 02:10:22.540960 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.541036 kubelet[2916]: E1216 02:10:22.540969 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.541128 kubelet[2916]: E1216 02:10:22.541098 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.541128 kubelet[2916]: W1216 02:10:22.541105 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.541128 kubelet[2916]: E1216 02:10:22.541112 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.541291 kubelet[2916]: E1216 02:10:22.541282 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.541291 kubelet[2916]: W1216 02:10:22.541291 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.541336 kubelet[2916]: E1216 02:10:22.541299 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.541585 kubelet[2916]: E1216 02:10:22.541541 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.541585 kubelet[2916]: W1216 02:10:22.541552 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.541585 kubelet[2916]: E1216 02:10:22.541560 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.541818 kubelet[2916]: E1216 02:10:22.541783 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.541818 kubelet[2916]: W1216 02:10:22.541818 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.542023 kubelet[2916]: E1216 02:10:22.542003 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.542259 kubelet[2916]: E1216 02:10:22.542196 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.542259 kubelet[2916]: W1216 02:10:22.542209 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.542259 kubelet[2916]: E1216 02:10:22.542220 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.542391 kubelet[2916]: E1216 02:10:22.542374 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.542391 kubelet[2916]: W1216 02:10:22.542385 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.542443 kubelet[2916]: E1216 02:10:22.542393 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.542586 kubelet[2916]: E1216 02:10:22.542513 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.542586 kubelet[2916]: W1216 02:10:22.542519 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.542586 kubelet[2916]: E1216 02:10:22.542529 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.568038 kubelet[2916]: E1216 02:10:22.568003 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.568038 kubelet[2916]: W1216 02:10:22.568040 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.568725 kubelet[2916]: E1216 02:10:22.568057 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.568725 kubelet[2916]: E1216 02:10:22.568245 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.568725 kubelet[2916]: W1216 02:10:22.568254 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.568725 kubelet[2916]: E1216 02:10:22.568271 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.568725 kubelet[2916]: E1216 02:10:22.568467 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.568725 kubelet[2916]: W1216 02:10:22.568475 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.568725 kubelet[2916]: E1216 02:10:22.568484 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.569295 kubelet[2916]: E1216 02:10:22.568778 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.569295 kubelet[2916]: W1216 02:10:22.568788 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.569295 kubelet[2916]: E1216 02:10:22.568798 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.569295 kubelet[2916]: E1216 02:10:22.568971 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.569295 kubelet[2916]: W1216 02:10:22.568980 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.569295 kubelet[2916]: E1216 02:10:22.568988 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.569295 kubelet[2916]: E1216 02:10:22.569248 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.569295 kubelet[2916]: W1216 02:10:22.569259 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.569295 kubelet[2916]: E1216 02:10:22.569268 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.570414 kubelet[2916]: E1216 02:10:22.569476 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.570414 kubelet[2916]: W1216 02:10:22.569485 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.570414 kubelet[2916]: E1216 02:10:22.569494 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.570414 kubelet[2916]: E1216 02:10:22.569751 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.570414 kubelet[2916]: W1216 02:10:22.569776 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.570414 kubelet[2916]: E1216 02:10:22.569787 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.570414 kubelet[2916]: E1216 02:10:22.570109 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.570414 kubelet[2916]: W1216 02:10:22.570120 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.570414 kubelet[2916]: E1216 02:10:22.570131 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.570414 kubelet[2916]: E1216 02:10:22.570375 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.570598 kubelet[2916]: W1216 02:10:22.570383 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.570598 kubelet[2916]: E1216 02:10:22.570395 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.570598 kubelet[2916]: E1216 02:10:22.570562 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.570598 kubelet[2916]: W1216 02:10:22.570569 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.570598 kubelet[2916]: E1216 02:10:22.570595 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.571095 kubelet[2916]: E1216 02:10:22.571057 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.571095 kubelet[2916]: W1216 02:10:22.571074 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.571095 kubelet[2916]: E1216 02:10:22.571087 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.571871 kubelet[2916]: E1216 02:10:22.571792 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.571871 kubelet[2916]: W1216 02:10:22.571820 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.571871 kubelet[2916]: E1216 02:10:22.571835 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.575730 kubelet[2916]: E1216 02:10:22.572055 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.575730 kubelet[2916]: W1216 02:10:22.572064 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.575730 kubelet[2916]: E1216 02:10:22.572073 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.575730 kubelet[2916]: E1216 02:10:22.572204 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.575730 kubelet[2916]: W1216 02:10:22.572210 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.575730 kubelet[2916]: E1216 02:10:22.572218 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.575730 kubelet[2916]: E1216 02:10:22.572347 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.575730 kubelet[2916]: W1216 02:10:22.572353 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.575730 kubelet[2916]: E1216 02:10:22.572362 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.575730 kubelet[2916]: E1216 02:10:22.572504 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.576029 kubelet[2916]: W1216 02:10:22.572512 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.576029 kubelet[2916]: E1216 02:10:22.572530 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:10:22.576029 kubelet[2916]: E1216 02:10:22.572836 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:10:22.576029 kubelet[2916]: W1216 02:10:22.572852 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:10:22.576029 kubelet[2916]: E1216 02:10:22.572875 2916 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:10:22.579807 containerd[1662]: time="2025-12-16T02:10:22.577073435Z" level=info msg="Container 095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:22.586069 containerd[1662]: time="2025-12-16T02:10:22.586022622Z" level=info msg="CreateContainer within sandbox \"a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b\"" Dec 16 02:10:22.586625 containerd[1662]: time="2025-12-16T02:10:22.586598824Z" level=info msg="StartContainer for \"095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b\"" Dec 16 02:10:22.588020 containerd[1662]: time="2025-12-16T02:10:22.587995428Z" level=info msg="connecting to shim 095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b" address="unix:///run/containerd/s/fc0363d8f6bc8b2ab671e1a3f507dfabdba26b650402689ed0358dd49aa59028" protocol=ttrpc version=3 Dec 16 02:10:22.608038 systemd[1]: Started cri-containerd-095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b.scope - libcontainer container 095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b. 
Dec 16 02:10:22.679000 audit: BPF prog-id=166 op=LOAD Dec 16 02:10:22.679000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3424 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:22.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356637363039653230316234653061373236653666333337393831 Dec 16 02:10:22.679000 audit: BPF prog-id=167 op=LOAD Dec 16 02:10:22.679000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3424 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:22.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356637363039653230316234653061373236653666333337393831 Dec 16 02:10:22.679000 audit: BPF prog-id=167 op=UNLOAD Dec 16 02:10:22.679000 audit[3628]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:22.679000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356637363039653230316234653061373236653666333337393831 Dec 16 02:10:22.679000 audit: BPF prog-id=166 op=UNLOAD Dec 16 02:10:22.679000 audit[3628]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:22.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356637363039653230316234653061373236653666333337393831 Dec 16 02:10:22.679000 audit: BPF prog-id=168 op=LOAD Dec 16 02:10:22.679000 audit[3628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3424 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:22.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356637363039653230316234653061373236653666333337393831 Dec 16 02:10:22.703761 containerd[1662]: time="2025-12-16T02:10:22.703115253Z" level=info msg="StartContainer for \"095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b\" returns successfully" Dec 16 02:10:22.714679 systemd[1]: cri-containerd-095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b.scope: Deactivated successfully. 
Dec 16 02:10:22.717520 containerd[1662]: time="2025-12-16T02:10:22.717480016Z" level=info msg="received container exit event container_id:\"095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b\" id:\"095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b\" pid:3641 exited_at:{seconds:1765851022 nanos:717063855}" Dec 16 02:10:22.718000 audit: BPF prog-id=168 op=UNLOAD Dec 16 02:10:22.737012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-095f7609e201b4e0a726e6f33798153e2417601cd4a3395576ba6007a92c1a3b-rootfs.mount: Deactivated successfully. Dec 16 02:10:24.421800 kubelet[2916]: E1216 02:10:24.421745 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:10:26.421085 kubelet[2916]: E1216 02:10:26.421040 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:10:26.500391 containerd[1662]: time="2025-12-16T02:10:26.500348091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 02:10:28.422920 kubelet[2916]: E1216 02:10:28.422855 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:10:30.021390 containerd[1662]: time="2025-12-16T02:10:30.021300020Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:30.022739 containerd[1662]: time="2025-12-16T02:10:30.022432784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 02:10:30.023913 containerd[1662]: time="2025-12-16T02:10:30.023873748Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:30.028305 containerd[1662]: time="2025-12-16T02:10:30.028270041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:30.030199 containerd[1662]: time="2025-12-16T02:10:30.030161807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.529770516s" Dec 16 02:10:30.030199 containerd[1662]: time="2025-12-16T02:10:30.030197967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 02:10:30.033961 containerd[1662]: time="2025-12-16T02:10:30.033917218Z" level=info msg="CreateContainer within sandbox \"a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 02:10:30.044370 containerd[1662]: time="2025-12-16T02:10:30.044282889Z" level=info msg="Container 4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26: CDI devices from CRI Config.CDIDevices: []" Dec 16 
02:10:30.055375 containerd[1662]: time="2025-12-16T02:10:30.055246202Z" level=info msg="CreateContainer within sandbox \"a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26\"" Dec 16 02:10:30.055944 containerd[1662]: time="2025-12-16T02:10:30.055916244Z" level=info msg="StartContainer for \"4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26\"" Dec 16 02:10:30.057522 containerd[1662]: time="2025-12-16T02:10:30.057490689Z" level=info msg="connecting to shim 4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26" address="unix:///run/containerd/s/fc0363d8f6bc8b2ab671e1a3f507dfabdba26b650402689ed0358dd49aa59028" protocol=ttrpc version=3 Dec 16 02:10:30.079231 systemd[1]: Started cri-containerd-4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26.scope - libcontainer container 4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26. 
Dec 16 02:10:30.140000 audit: BPF prog-id=169 op=LOAD Dec 16 02:10:30.142645 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 02:10:30.142718 kernel: audit: type=1334 audit(1765851030.140:563): prog-id=169 op=LOAD Dec 16 02:10:30.142751 kernel: audit: type=1300 audit(1765851030.140:563): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3424 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.140000 audit[3688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3424 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353237303261356232333235353464326331333538623535376463 Dec 16 02:10:30.149023 kernel: audit: type=1327 audit(1765851030.140:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353237303261356232333235353464326331333538623535376463 Dec 16 02:10:30.149355 kernel: audit: type=1334 audit(1765851030.140:564): prog-id=170 op=LOAD Dec 16 02:10:30.140000 audit: BPF prog-id=170 op=LOAD Dec 16 02:10:30.140000 audit[3688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3424 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.152986 kernel: audit: type=1300 audit(1765851030.140:564): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3424 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353237303261356232333235353464326331333538623535376463 Dec 16 02:10:30.156487 kernel: audit: type=1327 audit(1765851030.140:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353237303261356232333235353464326331333538623535376463 Dec 16 02:10:30.156656 kernel: audit: type=1334 audit(1765851030.141:565): prog-id=170 op=UNLOAD Dec 16 02:10:30.141000 audit: BPF prog-id=170 op=UNLOAD Dec 16 02:10:30.157301 kernel: audit: type=1300 audit(1765851030.141:565): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.141000 audit[3688]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.141000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353237303261356232333235353464326331333538623535376463 Dec 16 02:10:30.163446 kernel: audit: type=1327 audit(1765851030.141:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353237303261356232333235353464326331333538623535376463 Dec 16 02:10:30.163683 kernel: audit: type=1334 audit(1765851030.141:566): prog-id=169 op=UNLOAD Dec 16 02:10:30.141000 audit: BPF prog-id=169 op=UNLOAD Dec 16 02:10:30.141000 audit[3688]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353237303261356232333235353464326331333538623535376463 Dec 16 02:10:30.141000 audit: BPF prog-id=171 op=LOAD Dec 16 02:10:30.141000 audit[3688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3424 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.141000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435353237303261356232333235353464326331333538623535376463 Dec 16 02:10:30.180076 containerd[1662]: time="2025-12-16T02:10:30.180036817Z" level=info msg="StartContainer for \"4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26\" returns successfully" Dec 16 02:10:30.422514 kubelet[2916]: E1216 02:10:30.422271 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:10:31.450103 containerd[1662]: time="2025-12-16T02:10:31.450059509Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 02:10:31.451913 systemd[1]: cri-containerd-4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26.scope: Deactivated successfully. Dec 16 02:10:31.452233 systemd[1]: cri-containerd-4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26.scope: Consumed 476ms CPU time, 194.2M memory peak, 165.9M written to disk. 
Dec 16 02:10:31.453357 containerd[1662]: time="2025-12-16T02:10:31.453322159Z" level=info msg="received container exit event container_id:\"4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26\" id:\"4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26\" pid:3700 exited_at:{seconds:1765851031 nanos:453066078}" Dec 16 02:10:31.456000 audit: BPF prog-id=171 op=UNLOAD Dec 16 02:10:31.462889 kubelet[2916]: I1216 02:10:31.462840 2916 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 02:10:31.481556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4552702a5b232554d2c1358b557dcab243e44cfcb5eee6e4454e7956c8df6d26-rootfs.mount: Deactivated successfully. Dec 16 02:10:32.591055 systemd[1]: Created slice kubepods-burstable-pod4a2c8657_1432_4d11_acb7_c419b126c948.slice - libcontainer container kubepods-burstable-pod4a2c8657_1432_4d11_acb7_c419b126c948.slice. Dec 16 02:10:32.637768 kubelet[2916]: I1216 02:10:32.637718 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m5hg\" (UniqueName: \"kubernetes.io/projected/4a2c8657-1432-4d11-acb7-c419b126c948-kube-api-access-8m5hg\") pod \"coredns-66bc5c9577-55cxh\" (UID: \"4a2c8657-1432-4d11-acb7-c419b126c948\") " pod="kube-system/coredns-66bc5c9577-55cxh" Dec 16 02:10:32.637768 kubelet[2916]: I1216 02:10:32.637764 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2c8657-1432-4d11-acb7-c419b126c948-config-volume\") pod \"coredns-66bc5c9577-55cxh\" (UID: \"4a2c8657-1432-4d11-acb7-c419b126c948\") " pod="kube-system/coredns-66bc5c9577-55cxh" Dec 16 02:10:32.910842 containerd[1662]: time="2025-12-16T02:10:32.910417652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-55cxh,Uid:4a2c8657-1432-4d11-acb7-c419b126c948,Namespace:kube-system,Attempt:0,}" Dec 16 
02:10:32.915751 systemd[1]: Created slice kubepods-besteffort-pod60bcca8f_0026_44f8_829f_16d201879e45.slice - libcontainer container kubepods-besteffort-pod60bcca8f_0026_44f8_829f_16d201879e45.slice. Dec 16 02:10:32.929066 systemd[1]: Created slice kubepods-burstable-pod8d5776d1_6402_4926_ae94_65987d42425c.slice - libcontainer container kubepods-burstable-pod8d5776d1_6402_4926_ae94_65987d42425c.slice. Dec 16 02:10:32.939623 kubelet[2916]: I1216 02:10:32.939571 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60bcca8f-0026-44f8-829f-16d201879e45-whisker-backend-key-pair\") pod \"whisker-54bdd96c5d-9r6ht\" (UID: \"60bcca8f-0026-44f8-829f-16d201879e45\") " pod="calico-system/whisker-54bdd96c5d-9r6ht" Dec 16 02:10:32.939623 kubelet[2916]: I1216 02:10:32.939612 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lvq\" (UniqueName: \"kubernetes.io/projected/60bcca8f-0026-44f8-829f-16d201879e45-kube-api-access-c7lvq\") pod \"whisker-54bdd96c5d-9r6ht\" (UID: \"60bcca8f-0026-44f8-829f-16d201879e45\") " pod="calico-system/whisker-54bdd96c5d-9r6ht" Dec 16 02:10:32.939623 kubelet[2916]: I1216 02:10:32.939633 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60bcca8f-0026-44f8-829f-16d201879e45-whisker-ca-bundle\") pod \"whisker-54bdd96c5d-9r6ht\" (UID: \"60bcca8f-0026-44f8-829f-16d201879e45\") " pod="calico-system/whisker-54bdd96c5d-9r6ht" Dec 16 02:10:32.940042 systemd[1]: Created slice kubepods-besteffort-pod451fe39a_cf01_4312_91ac_f3d2c7b640b1.slice - libcontainer container kubepods-besteffort-pod451fe39a_cf01_4312_91ac_f3d2c7b640b1.slice. 
Dec 16 02:10:32.948603 containerd[1662]: time="2025-12-16T02:10:32.948557967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgqkd,Uid:451fe39a-cf01-4312-91ac-f3d2c7b640b1,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:32.949294 systemd[1]: Created slice kubepods-besteffort-pod9fa7981b_06dd_4022_adb6_f277629bbdff.slice - libcontainer container kubepods-besteffort-pod9fa7981b_06dd_4022_adb6_f277629bbdff.slice. Dec 16 02:10:32.960885 systemd[1]: Created slice kubepods-besteffort-pod621fd2d4_81d9_4021_8cd8_39b0addbde62.slice - libcontainer container kubepods-besteffort-pod621fd2d4_81d9_4021_8cd8_39b0addbde62.slice. Dec 16 02:10:32.964691 systemd[1]: Created slice kubepods-besteffort-pod8f7f3b6e_a739_4bf5_a67c_595cf0d67494.slice - libcontainer container kubepods-besteffort-pod8f7f3b6e_a739_4bf5_a67c_595cf0d67494.slice. Dec 16 02:10:32.972812 systemd[1]: Created slice kubepods-besteffort-podbf9a6289_a4bc_424f_93a6_cea4264ad5e4.slice - libcontainer container kubepods-besteffort-podbf9a6289_a4bc_424f_93a6_cea4264ad5e4.slice. Dec 16 02:10:33.020068 containerd[1662]: time="2025-12-16T02:10:33.020018421Z" level=error msg="Failed to destroy network for sandbox \"d87cdf997a71a23667e134de88fae33603b30a96ec671a51dba0bbd7de513403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.022099 containerd[1662]: time="2025-12-16T02:10:33.022015947Z" level=error msg="Failed to destroy network for sandbox \"83eb11e136d652f4168e533c6618d1c26304517dc6241f66af34d30170e57093\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.022236 systemd[1]: run-netns-cni\x2df69c7ae3\x2d69a8\x2d65fa\x2d1a44\x2d32054e523064.mount: Deactivated successfully. 
Dec 16 02:10:33.023726 containerd[1662]: time="2025-12-16T02:10:33.023686672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgqkd,Uid:451fe39a-cf01-4312-91ac-f3d2c7b640b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87cdf997a71a23667e134de88fae33603b30a96ec671a51dba0bbd7de513403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.024060 kubelet[2916]: E1216 02:10:33.024023 2916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87cdf997a71a23667e134de88fae33603b30a96ec671a51dba0bbd7de513403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.024128 kubelet[2916]: E1216 02:10:33.024087 2916 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87cdf997a71a23667e134de88fae33603b30a96ec671a51dba0bbd7de513403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sgqkd" Dec 16 02:10:33.024128 kubelet[2916]: E1216 02:10:33.024109 2916 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87cdf997a71a23667e134de88fae33603b30a96ec671a51dba0bbd7de513403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sgqkd" 
Dec 16 02:10:33.024186 kubelet[2916]: E1216 02:10:33.024157 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d87cdf997a71a23667e134de88fae33603b30a96ec671a51dba0bbd7de513403\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:10:33.025259 systemd[1]: run-netns-cni\x2db824ae06\x2d49e0\x2df9d3\x2d9e19\x2d2a49d2f28ca1.mount: Deactivated successfully. Dec 16 02:10:33.027626 containerd[1662]: time="2025-12-16T02:10:33.027587844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-55cxh,Uid:4a2c8657-1432-4d11-acb7-c419b126c948,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83eb11e136d652f4168e533c6618d1c26304517dc6241f66af34d30170e57093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.027851 kubelet[2916]: E1216 02:10:33.027816 2916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83eb11e136d652f4168e533c6618d1c26304517dc6241f66af34d30170e57093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.027938 kubelet[2916]: E1216 
02:10:33.027875 2916 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83eb11e136d652f4168e533c6618d1c26304517dc6241f66af34d30170e57093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-55cxh" Dec 16 02:10:33.027938 kubelet[2916]: E1216 02:10:33.027893 2916 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83eb11e136d652f4168e533c6618d1c26304517dc6241f66af34d30170e57093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-55cxh" Dec 16 02:10:33.027989 kubelet[2916]: E1216 02:10:33.027941 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-55cxh_kube-system(4a2c8657-1432-4d11-acb7-c419b126c948)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-55cxh_kube-system(4a2c8657-1432-4d11-acb7-c419b126c948)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83eb11e136d652f4168e533c6618d1c26304517dc6241f66af34d30170e57093\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-55cxh" podUID="4a2c8657-1432-4d11-acb7-c419b126c948" Dec 16 02:10:33.040903 kubelet[2916]: I1216 02:10:33.040226 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/621fd2d4-81d9-4021-8cd8-39b0addbde62-goldmane-key-pair\") pod \"goldmane-7c778bb748-b8vhz\" (UID: \"621fd2d4-81d9-4021-8cd8-39b0addbde62\") " pod="calico-system/goldmane-7c778bb748-b8vhz" Dec 16 02:10:33.040903 kubelet[2916]: I1216 02:10:33.040308 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkfzg\" (UniqueName: \"kubernetes.io/projected/621fd2d4-81d9-4021-8cd8-39b0addbde62-kube-api-access-jkfzg\") pod \"goldmane-7c778bb748-b8vhz\" (UID: \"621fd2d4-81d9-4021-8cd8-39b0addbde62\") " pod="calico-system/goldmane-7c778bb748-b8vhz" Dec 16 02:10:33.040903 kubelet[2916]: I1216 02:10:33.040395 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d5776d1-6402-4926-ae94-65987d42425c-config-volume\") pod \"coredns-66bc5c9577-v65lh\" (UID: \"8d5776d1-6402-4926-ae94-65987d42425c\") " pod="kube-system/coredns-66bc5c9577-v65lh" Dec 16 02:10:33.040903 kubelet[2916]: I1216 02:10:33.040432 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldm7\" (UniqueName: \"kubernetes.io/projected/8d5776d1-6402-4926-ae94-65987d42425c-kube-api-access-tldm7\") pod \"coredns-66bc5c9577-v65lh\" (UID: \"8d5776d1-6402-4926-ae94-65987d42425c\") " pod="kube-system/coredns-66bc5c9577-v65lh" Dec 16 02:10:33.040903 kubelet[2916]: I1216 02:10:33.040451 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9dw\" (UniqueName: \"kubernetes.io/projected/9fa7981b-06dd-4022-adb6-f277629bbdff-kube-api-access-6g9dw\") pod \"calico-apiserver-6cd794ff8d-86jwz\" (UID: \"9fa7981b-06dd-4022-adb6-f277629bbdff\") " pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" Dec 16 02:10:33.041078 kubelet[2916]: I1216 02:10:33.040468 2916 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl279\" (UniqueName: \"kubernetes.io/projected/bf9a6289-a4bc-424f-93a6-cea4264ad5e4-kube-api-access-xl279\") pod \"calico-apiserver-6cd794ff8d-6hwpw\" (UID: \"bf9a6289-a4bc-424f-93a6-cea4264ad5e4\") " pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" Dec 16 02:10:33.041078 kubelet[2916]: I1216 02:10:33.040490 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7mv\" (UniqueName: \"kubernetes.io/projected/8f7f3b6e-a739-4bf5-a67c-595cf0d67494-kube-api-access-qr7mv\") pod \"calico-kube-controllers-687df49b8c-8hkjd\" (UID: \"8f7f3b6e-a739-4bf5-a67c-595cf0d67494\") " pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" Dec 16 02:10:33.041078 kubelet[2916]: I1216 02:10:33.040505 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf9a6289-a4bc-424f-93a6-cea4264ad5e4-calico-apiserver-certs\") pod \"calico-apiserver-6cd794ff8d-6hwpw\" (UID: \"bf9a6289-a4bc-424f-93a6-cea4264ad5e4\") " pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" Dec 16 02:10:33.041078 kubelet[2916]: I1216 02:10:33.040549 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f7f3b6e-a739-4bf5-a67c-595cf0d67494-tigera-ca-bundle\") pod \"calico-kube-controllers-687df49b8c-8hkjd\" (UID: \"8f7f3b6e-a739-4bf5-a67c-595cf0d67494\") " pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" Dec 16 02:10:33.041078 kubelet[2916]: I1216 02:10:33.040563 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9fa7981b-06dd-4022-adb6-f277629bbdff-calico-apiserver-certs\") pod \"calico-apiserver-6cd794ff8d-86jwz\" (UID: 
\"9fa7981b-06dd-4022-adb6-f277629bbdff\") " pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" Dec 16 02:10:33.041182 kubelet[2916]: I1216 02:10:33.040577 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621fd2d4-81d9-4021-8cd8-39b0addbde62-config\") pod \"goldmane-7c778bb748-b8vhz\" (UID: \"621fd2d4-81d9-4021-8cd8-39b0addbde62\") " pod="calico-system/goldmane-7c778bb748-b8vhz" Dec 16 02:10:33.041182 kubelet[2916]: I1216 02:10:33.040591 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/621fd2d4-81d9-4021-8cd8-39b0addbde62-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-b8vhz\" (UID: \"621fd2d4-81d9-4021-8cd8-39b0addbde62\") " pod="calico-system/goldmane-7c778bb748-b8vhz" Dec 16 02:10:33.229194 containerd[1662]: time="2025-12-16T02:10:33.229000969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54bdd96c5d-9r6ht,Uid:60bcca8f-0026-44f8-829f-16d201879e45,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:33.237996 containerd[1662]: time="2025-12-16T02:10:33.237949996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v65lh,Uid:8d5776d1-6402-4926-ae94-65987d42425c,Namespace:kube-system,Attempt:0,}" Dec 16 02:10:33.254565 containerd[1662]: time="2025-12-16T02:10:33.254512285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd794ff8d-86jwz,Uid:9fa7981b-06dd-4022-adb6-f277629bbdff,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:10:33.271887 containerd[1662]: time="2025-12-16T02:10:33.271828537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-b8vhz,Uid:621fd2d4-81d9-4021-8cd8-39b0addbde62,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:33.273925 containerd[1662]: time="2025-12-16T02:10:33.273793663Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-687df49b8c-8hkjd,Uid:8f7f3b6e-a739-4bf5-a67c-595cf0d67494,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:33.286482 containerd[1662]: time="2025-12-16T02:10:33.285492898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd794ff8d-6hwpw,Uid:bf9a6289-a4bc-424f-93a6-cea4264ad5e4,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:10:33.288207 containerd[1662]: time="2025-12-16T02:10:33.288156586Z" level=error msg="Failed to destroy network for sandbox \"fc0141447cbcdf450decf765cfb0066b2eb748837c9737c73a67539f34d85477\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.290663 containerd[1662]: time="2025-12-16T02:10:33.290623074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54bdd96c5d-9r6ht,Uid:60bcca8f-0026-44f8-829f-16d201879e45,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc0141447cbcdf450decf765cfb0066b2eb748837c9737c73a67539f34d85477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.290878 kubelet[2916]: E1216 02:10:33.290824 2916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc0141447cbcdf450decf765cfb0066b2eb748837c9737c73a67539f34d85477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.290980 kubelet[2916]: E1216 02:10:33.290890 2916 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"fc0141447cbcdf450decf765cfb0066b2eb748837c9737c73a67539f34d85477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54bdd96c5d-9r6ht" Dec 16 02:10:33.290980 kubelet[2916]: E1216 02:10:33.290910 2916 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc0141447cbcdf450decf765cfb0066b2eb748837c9737c73a67539f34d85477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54bdd96c5d-9r6ht" Dec 16 02:10:33.290980 kubelet[2916]: E1216 02:10:33.290958 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54bdd96c5d-9r6ht_calico-system(60bcca8f-0026-44f8-829f-16d201879e45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54bdd96c5d-9r6ht_calico-system(60bcca8f-0026-44f8-829f-16d201879e45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc0141447cbcdf450decf765cfb0066b2eb748837c9737c73a67539f34d85477\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54bdd96c5d-9r6ht" podUID="60bcca8f-0026-44f8-829f-16d201879e45" Dec 16 02:10:33.316923 containerd[1662]: time="2025-12-16T02:10:33.316867032Z" level=error msg="Failed to destroy network for sandbox \"3bdc7ef57e33cc75181fa8a6e5b1fc9683f5bfcfefac171be0b2e8e5c883cf68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
02:10:33.322719 containerd[1662]: time="2025-12-16T02:10:33.322562250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v65lh,Uid:8d5776d1-6402-4926-ae94-65987d42425c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bdc7ef57e33cc75181fa8a6e5b1fc9683f5bfcfefac171be0b2e8e5c883cf68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.323450 kubelet[2916]: E1216 02:10:33.323397 2916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bdc7ef57e33cc75181fa8a6e5b1fc9683f5bfcfefac171be0b2e8e5c883cf68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.323546 kubelet[2916]: E1216 02:10:33.323453 2916 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bdc7ef57e33cc75181fa8a6e5b1fc9683f5bfcfefac171be0b2e8e5c883cf68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v65lh" Dec 16 02:10:33.323546 kubelet[2916]: E1216 02:10:33.323471 2916 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bdc7ef57e33cc75181fa8a6e5b1fc9683f5bfcfefac171be0b2e8e5c883cf68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v65lh" Dec 
16 02:10:33.323546 kubelet[2916]: E1216 02:10:33.323515 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-v65lh_kube-system(8d5776d1-6402-4926-ae94-65987d42425c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-v65lh_kube-system(8d5776d1-6402-4926-ae94-65987d42425c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bdc7ef57e33cc75181fa8a6e5b1fc9683f5bfcfefac171be0b2e8e5c883cf68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-v65lh" podUID="8d5776d1-6402-4926-ae94-65987d42425c" Dec 16 02:10:33.335159 containerd[1662]: time="2025-12-16T02:10:33.335109407Z" level=error msg="Failed to destroy network for sandbox \"1c7df20ebe28e0e5ad65c057f1cb1dd61dfba45013079735dbc4824cd6cb19fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.337771 containerd[1662]: time="2025-12-16T02:10:33.337709015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd794ff8d-86jwz,Uid:9fa7981b-06dd-4022-adb6-f277629bbdff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7df20ebe28e0e5ad65c057f1cb1dd61dfba45013079735dbc4824cd6cb19fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.338113 kubelet[2916]: E1216 02:10:33.338076 2916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1c7df20ebe28e0e5ad65c057f1cb1dd61dfba45013079735dbc4824cd6cb19fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.338186 kubelet[2916]: E1216 02:10:33.338131 2916 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7df20ebe28e0e5ad65c057f1cb1dd61dfba45013079735dbc4824cd6cb19fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" Dec 16 02:10:33.338186 kubelet[2916]: E1216 02:10:33.338148 2916 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7df20ebe28e0e5ad65c057f1cb1dd61dfba45013079735dbc4824cd6cb19fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" Dec 16 02:10:33.338248 kubelet[2916]: E1216 02:10:33.338191 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd794ff8d-86jwz_calico-apiserver(9fa7981b-06dd-4022-adb6-f277629bbdff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd794ff8d-86jwz_calico-apiserver(9fa7981b-06dd-4022-adb6-f277629bbdff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c7df20ebe28e0e5ad65c057f1cb1dd61dfba45013079735dbc4824cd6cb19fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:10:33.355755 containerd[1662]: time="2025-12-16T02:10:33.355650589Z" level=error msg="Failed to destroy network for sandbox \"f09d693f5cd87bd069acb90561857bf7b13537858c21bea0f7b38880c395e4d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.356245 containerd[1662]: time="2025-12-16T02:10:33.356209551Z" level=error msg="Failed to destroy network for sandbox \"ad83c3c99e9fad1b1097330ca34c83f805aaf81bbdf20f0ecbb68eb1c7927f9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.362671 containerd[1662]: time="2025-12-16T02:10:33.362505849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687df49b8c-8hkjd,Uid:8f7f3b6e-a739-4bf5-a67c-595cf0d67494,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09d693f5cd87bd069acb90561857bf7b13537858c21bea0f7b38880c395e4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.363877 kubelet[2916]: E1216 02:10:33.363030 2916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09d693f5cd87bd069acb90561857bf7b13537858c21bea0f7b38880c395e4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.364545 containerd[1662]: time="2025-12-16T02:10:33.363993254Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd794ff8d-6hwpw,Uid:bf9a6289-a4bc-424f-93a6-cea4264ad5e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad83c3c99e9fad1b1097330ca34c83f805aaf81bbdf20f0ecbb68eb1c7927f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.364663 kubelet[2916]: E1216 02:10:33.364067 2916 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09d693f5cd87bd069acb90561857bf7b13537858c21bea0f7b38880c395e4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" Dec 16 02:10:33.364663 kubelet[2916]: E1216 02:10:33.364165 2916 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f09d693f5cd87bd069acb90561857bf7b13537858c21bea0f7b38880c395e4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" Dec 16 02:10:33.364663 kubelet[2916]: E1216 02:10:33.364188 2916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad83c3c99e9fad1b1097330ca34c83f805aaf81bbdf20f0ecbb68eb1c7927f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.364800 
kubelet[2916]: E1216 02:10:33.364230 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-687df49b8c-8hkjd_calico-system(8f7f3b6e-a739-4bf5-a67c-595cf0d67494)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-687df49b8c-8hkjd_calico-system(8f7f3b6e-a739-4bf5-a67c-595cf0d67494)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f09d693f5cd87bd069acb90561857bf7b13537858c21bea0f7b38880c395e4d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:10:33.364800 kubelet[2916]: E1216 02:10:33.364242 2916 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad83c3c99e9fad1b1097330ca34c83f805aaf81bbdf20f0ecbb68eb1c7927f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" Dec 16 02:10:33.364800 kubelet[2916]: E1216 02:10:33.364259 2916 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad83c3c99e9fad1b1097330ca34c83f805aaf81bbdf20f0ecbb68eb1c7927f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" Dec 16 02:10:33.364969 kubelet[2916]: E1216 02:10:33.364326 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6cd794ff8d-6hwpw_calico-apiserver(bf9a6289-a4bc-424f-93a6-cea4264ad5e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd794ff8d-6hwpw_calico-apiserver(bf9a6289-a4bc-424f-93a6-cea4264ad5e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad83c3c99e9fad1b1097330ca34c83f805aaf81bbdf20f0ecbb68eb1c7927f9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:10:33.367107 containerd[1662]: time="2025-12-16T02:10:33.367074543Z" level=error msg="Failed to destroy network for sandbox \"0b7d5e31a6c4e8d4cdfb399b11a7c5714627aed534c940d0ef51f4beb1b0bfb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.371209 containerd[1662]: time="2025-12-16T02:10:33.371173355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-b8vhz,Uid:621fd2d4-81d9-4021-8cd8-39b0addbde62,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7d5e31a6c4e8d4cdfb399b11a7c5714627aed534c940d0ef51f4beb1b0bfb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.371665 kubelet[2916]: E1216 02:10:33.371631 2916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7d5e31a6c4e8d4cdfb399b11a7c5714627aed534c940d0ef51f4beb1b0bfb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:10:33.371756 kubelet[2916]: E1216 02:10:33.371682 2916 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7d5e31a6c4e8d4cdfb399b11a7c5714627aed534c940d0ef51f4beb1b0bfb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-b8vhz" Dec 16 02:10:33.371756 kubelet[2916]: E1216 02:10:33.371701 2916 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7d5e31a6c4e8d4cdfb399b11a7c5714627aed534c940d0ef51f4beb1b0bfb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-b8vhz" Dec 16 02:10:33.371890 kubelet[2916]: E1216 02:10:33.371749 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-b8vhz_calico-system(621fd2d4-81d9-4021-8cd8-39b0addbde62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-b8vhz_calico-system(621fd2d4-81d9-4021-8cd8-39b0addbde62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b7d5e31a6c4e8d4cdfb399b11a7c5714627aed534c940d0ef51f4beb1b0bfb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:10:33.523184 containerd[1662]: time="2025-12-16T02:10:33.522914371Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 02:10:37.386164 kubelet[2916]: I1216 02:10:37.386045 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 02:10:37.413000 audit[4011]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:37.417019 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 02:10:37.417111 kernel: audit: type=1325 audit(1765851037.413:569): table=filter:117 family=2 entries=21 op=nft_register_rule pid=4011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:37.417134 kernel: audit: type=1300 audit(1765851037.413:569): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe36eaf70 a2=0 a3=1 items=0 ppid=3054 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:37.413000 audit[4011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe36eaf70 a2=0 a3=1 items=0 ppid=3054 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:37.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:37.423048 kernel: audit: type=1327 audit(1765851037.413:569): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:37.423131 kernel: audit: type=1325 audit(1765851037.420:570): table=nat:118 family=2 entries=19 op=nft_register_chain pid=4011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:37.420000 audit[4011]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4011 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:37.425093 kernel: audit: type=1300 audit(1765851037.420:570): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe36eaf70 a2=0 a3=1 items=0 ppid=3054 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:37.420000 audit[4011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe36eaf70 a2=0 a3=1 items=0 ppid=3054 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:37.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:37.431173 kernel: audit: type=1327 audit(1765851037.420:570): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:40.413878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537715176.mount: Deactivated successfully. 
Dec 16 02:10:40.439931 containerd[1662]: time="2025-12-16T02:10:40.439837054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:40.441281 containerd[1662]: time="2025-12-16T02:10:40.441201258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 02:10:40.442917 containerd[1662]: time="2025-12-16T02:10:40.442841623Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:40.445543 containerd[1662]: time="2025-12-16T02:10:40.445479030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:10:40.446091 containerd[1662]: time="2025-12-16T02:10:40.446025592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.922924061s" Dec 16 02:10:40.446091 containerd[1662]: time="2025-12-16T02:10:40.446055672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 02:10:40.459895 containerd[1662]: time="2025-12-16T02:10:40.459568513Z" level=info msg="CreateContainer within sandbox \"a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 02:10:40.470098 containerd[1662]: time="2025-12-16T02:10:40.470051664Z" level=info msg="Container 
fd3ebdf0f3abcc1e8e3b9b4c50577b3ac0ef71199d7c23de90216dd54f530935: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:40.486225 containerd[1662]: time="2025-12-16T02:10:40.486163353Z" level=info msg="CreateContainer within sandbox \"a30dd4300212019f68de8221195aa0df34d449e852d3c36379eb4c03a98faf17\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fd3ebdf0f3abcc1e8e3b9b4c50577b3ac0ef71199d7c23de90216dd54f530935\"" Dec 16 02:10:40.486767 containerd[1662]: time="2025-12-16T02:10:40.486712554Z" level=info msg="StartContainer for \"fd3ebdf0f3abcc1e8e3b9b4c50577b3ac0ef71199d7c23de90216dd54f530935\"" Dec 16 02:10:40.488541 containerd[1662]: time="2025-12-16T02:10:40.488486480Z" level=info msg="connecting to shim fd3ebdf0f3abcc1e8e3b9b4c50577b3ac0ef71199d7c23de90216dd54f530935" address="unix:///run/containerd/s/fc0363d8f6bc8b2ab671e1a3f507dfabdba26b650402689ed0358dd49aa59028" protocol=ttrpc version=3 Dec 16 02:10:40.515170 systemd[1]: Started cri-containerd-fd3ebdf0f3abcc1e8e3b9b4c50577b3ac0ef71199d7c23de90216dd54f530935.scope - libcontainer container fd3ebdf0f3abcc1e8e3b9b4c50577b3ac0ef71199d7c23de90216dd54f530935. 
Dec 16 02:10:40.566000 audit: BPF prog-id=172 op=LOAD Dec 16 02:10:40.566000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3424 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:40.572884 kernel: audit: type=1334 audit(1765851040.566:571): prog-id=172 op=LOAD Dec 16 02:10:40.572939 kernel: audit: type=1300 audit(1765851040.566:571): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3424 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:40.572971 kernel: audit: type=1327 audit(1765851040.566:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336562646630663361626363316538653362396234633530353737 Dec 16 02:10:40.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336562646630663361626363316538653362396234633530353737 Dec 16 02:10:40.567000 audit: BPF prog-id=173 op=LOAD Dec 16 02:10:40.577164 kernel: audit: type=1334 audit(1765851040.567:572): prog-id=173 op=LOAD Dec 16 02:10:40.567000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3424 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:40.567000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336562646630663361626363316538653362396234633530353737 Dec 16 02:10:40.568000 audit: BPF prog-id=173 op=UNLOAD Dec 16 02:10:40.568000 audit[4019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:40.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336562646630663361626363316538653362396234633530353737 Dec 16 02:10:40.568000 audit: BPF prog-id=172 op=UNLOAD Dec 16 02:10:40.568000 audit[4019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:40.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336562646630663361626363316538653362396234633530353737 Dec 16 02:10:40.568000 audit: BPF prog-id=174 op=LOAD Dec 16 02:10:40.568000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3424 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 02:10:40.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336562646630663361626363316538653362396234633530353737 Dec 16 02:10:40.600424 containerd[1662]: time="2025-12-16T02:10:40.600382015Z" level=info msg="StartContainer for \"fd3ebdf0f3abcc1e8e3b9b4c50577b3ac0ef71199d7c23de90216dd54f530935\" returns successfully" Dec 16 02:10:40.736121 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 02:10:40.736341 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 02:10:40.991216 kubelet[2916]: I1216 02:10:40.991171 2916 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60bcca8f-0026-44f8-829f-16d201879e45-whisker-backend-key-pair\") pod \"60bcca8f-0026-44f8-829f-16d201879e45\" (UID: \"60bcca8f-0026-44f8-829f-16d201879e45\") " Dec 16 02:10:40.991548 kubelet[2916]: I1216 02:10:40.991238 2916 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7lvq\" (UniqueName: \"kubernetes.io/projected/60bcca8f-0026-44f8-829f-16d201879e45-kube-api-access-c7lvq\") pod \"60bcca8f-0026-44f8-829f-16d201879e45\" (UID: \"60bcca8f-0026-44f8-829f-16d201879e45\") " Dec 16 02:10:40.991548 kubelet[2916]: I1216 02:10:40.991268 2916 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60bcca8f-0026-44f8-829f-16d201879e45-whisker-ca-bundle\") pod \"60bcca8f-0026-44f8-829f-16d201879e45\" (UID: \"60bcca8f-0026-44f8-829f-16d201879e45\") " Dec 16 02:10:40.992297 kubelet[2916]: I1216 02:10:40.992245 2916 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/60bcca8f-0026-44f8-829f-16d201879e45-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "60bcca8f-0026-44f8-829f-16d201879e45" (UID: "60bcca8f-0026-44f8-829f-16d201879e45"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 02:10:40.994140 kubelet[2916]: I1216 02:10:40.994101 2916 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60bcca8f-0026-44f8-829f-16d201879e45-kube-api-access-c7lvq" (OuterVolumeSpecName: "kube-api-access-c7lvq") pod "60bcca8f-0026-44f8-829f-16d201879e45" (UID: "60bcca8f-0026-44f8-829f-16d201879e45"). InnerVolumeSpecName "kube-api-access-c7lvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 02:10:40.995121 kubelet[2916]: I1216 02:10:40.995075 2916 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60bcca8f-0026-44f8-829f-16d201879e45-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "60bcca8f-0026-44f8-829f-16d201879e45" (UID: "60bcca8f-0026-44f8-829f-16d201879e45"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 02:10:41.091649 kubelet[2916]: I1216 02:10:41.091581 2916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c7lvq\" (UniqueName: \"kubernetes.io/projected/60bcca8f-0026-44f8-829f-16d201879e45-kube-api-access-c7lvq\") on node \"ci-4547-0-0-9-b4376e68e3\" DevicePath \"\"" Dec 16 02:10:41.091649 kubelet[2916]: I1216 02:10:41.091613 2916 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60bcca8f-0026-44f8-829f-16d201879e45-whisker-ca-bundle\") on node \"ci-4547-0-0-9-b4376e68e3\" DevicePath \"\"" Dec 16 02:10:41.091649 kubelet[2916]: I1216 02:10:41.091624 2916 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60bcca8f-0026-44f8-829f-16d201879e45-whisker-backend-key-pair\") on node \"ci-4547-0-0-9-b4376e68e3\" DevicePath \"\"" Dec 16 02:10:41.415014 systemd[1]: var-lib-kubelet-pods-60bcca8f\x2d0026\x2d44f8\x2d829f\x2d16d201879e45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc7lvq.mount: Deactivated successfully. Dec 16 02:10:41.415116 systemd[1]: var-lib-kubelet-pods-60bcca8f\x2d0026\x2d44f8\x2d829f\x2d16d201879e45-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 02:10:41.553424 systemd[1]: Removed slice kubepods-besteffort-pod60bcca8f_0026_44f8_829f_16d201879e45.slice - libcontainer container kubepods-besteffort-pod60bcca8f_0026_44f8_829f_16d201879e45.slice. 
Dec 16 02:10:41.571647 kubelet[2916]: I1216 02:10:41.571557 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bwpl5" podStartSLOduration=2.009656089 podStartE2EDuration="23.571541251s" podCreationTimestamp="2025-12-16 02:10:18 +0000 UTC" firstStartedPulling="2025-12-16 02:10:18.885207553 +0000 UTC m=+20.553242695" lastFinishedPulling="2025-12-16 02:10:40.447092675 +0000 UTC m=+42.115127857" observedRunningTime="2025-12-16 02:10:41.57123793 +0000 UTC m=+43.239273072" watchObservedRunningTime="2025-12-16 02:10:41.571541251 +0000 UTC m=+43.239576393" Dec 16 02:10:41.623710 systemd[1]: Created slice kubepods-besteffort-pod1d880e3b_9ee9_4ee7_a578_c81cb365da0d.slice - libcontainer container kubepods-besteffort-pod1d880e3b_9ee9_4ee7_a578_c81cb365da0d.slice. Dec 16 02:10:41.695569 kubelet[2916]: I1216 02:10:41.695205 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d880e3b-9ee9-4ee7-a578-c81cb365da0d-whisker-backend-key-pair\") pod \"whisker-78cff6db84-5gv4l\" (UID: \"1d880e3b-9ee9-4ee7-a578-c81cb365da0d\") " pod="calico-system/whisker-78cff6db84-5gv4l" Dec 16 02:10:41.695569 kubelet[2916]: I1216 02:10:41.695302 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvhx\" (UniqueName: \"kubernetes.io/projected/1d880e3b-9ee9-4ee7-a578-c81cb365da0d-kube-api-access-tqvhx\") pod \"whisker-78cff6db84-5gv4l\" (UID: \"1d880e3b-9ee9-4ee7-a578-c81cb365da0d\") " pod="calico-system/whisker-78cff6db84-5gv4l" Dec 16 02:10:41.695569 kubelet[2916]: I1216 02:10:41.695457 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d880e3b-9ee9-4ee7-a578-c81cb365da0d-whisker-ca-bundle\") pod \"whisker-78cff6db84-5gv4l\" (UID: 
\"1d880e3b-9ee9-4ee7-a578-c81cb365da0d\") " pod="calico-system/whisker-78cff6db84-5gv4l" Dec 16 02:10:41.930408 containerd[1662]: time="2025-12-16T02:10:41.930352248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cff6db84-5gv4l,Uid:1d880e3b-9ee9-4ee7-a578-c81cb365da0d,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:42.107360 systemd-networkd[1582]: cali8f055f5b344: Link UP Dec 16 02:10:42.107550 systemd-networkd[1582]: cali8f055f5b344: Gained carrier Dec 16 02:10:42.138845 containerd[1662]: 2025-12-16 02:10:41.955 [INFO][4084] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:10:42.138845 containerd[1662]: 2025-12-16 02:10:41.974 [INFO][4084] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0 whisker-78cff6db84- calico-system 1d880e3b-9ee9-4ee7-a578-c81cb365da0d 926 0 2025-12-16 02:10:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78cff6db84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-9-b4376e68e3 whisker-78cff6db84-5gv4l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8f055f5b344 [] [] }} ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Namespace="calico-system" Pod="whisker-78cff6db84-5gv4l" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-" Dec 16 02:10:42.138845 containerd[1662]: 2025-12-16 02:10:41.974 [INFO][4084] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Namespace="calico-system" Pod="whisker-78cff6db84-5gv4l" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" Dec 16 02:10:42.138845 containerd[1662]: 2025-12-16 02:10:42.037 [INFO][4098] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" HandleID="k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Workload="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.038 [INFO][4098] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" HandleID="k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Workload="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004c5420), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-9-b4376e68e3", "pod":"whisker-78cff6db84-5gv4l", "timestamp":"2025-12-16 02:10:42.037966571 +0000 UTC"}, Hostname:"ci-4547-0-0-9-b4376e68e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.038 [INFO][4098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.038 [INFO][4098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.038 [INFO][4098] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-9-b4376e68e3' Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.049 [INFO][4098] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.058 [INFO][4098] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.063 [INFO][4098] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.066 [INFO][4098] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139105 containerd[1662]: 2025-12-16 02:10:42.068 [INFO][4098] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139287 containerd[1662]: 2025-12-16 02:10:42.068 [INFO][4098] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139287 containerd[1662]: 2025-12-16 02:10:42.070 [INFO][4098] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2 Dec 16 02:10:42.139287 containerd[1662]: 2025-12-16 02:10:42.075 [INFO][4098] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139287 containerd[1662]: 2025-12-16 02:10:42.091 [INFO][4098] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.71.65/26] block=192.168.71.64/26 handle="k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139287 containerd[1662]: 2025-12-16 02:10:42.091 [INFO][4098] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.65/26] handle="k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:42.139287 containerd[1662]: 2025-12-16 02:10:42.091 [INFO][4098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:10:42.139287 containerd[1662]: 2025-12-16 02:10:42.091 [INFO][4098] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.65/26] IPv6=[] ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" HandleID="k8s-pod-network.a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Workload="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" Dec 16 02:10:42.139413 containerd[1662]: 2025-12-16 02:10:42.096 [INFO][4084] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Namespace="calico-system" Pod="whisker-78cff6db84-5gv4l" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0", GenerateName:"whisker-78cff6db84-", Namespace:"calico-system", SelfLink:"", UID:"1d880e3b-9ee9-4ee7-a578-c81cb365da0d", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cff6db84", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"", Pod:"whisker-78cff6db84-5gv4l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f055f5b344", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:42.139413 containerd[1662]: 2025-12-16 02:10:42.096 [INFO][4084] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.65/32] ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Namespace="calico-system" Pod="whisker-78cff6db84-5gv4l" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" Dec 16 02:10:42.139483 containerd[1662]: 2025-12-16 02:10:42.096 [INFO][4084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f055f5b344 ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Namespace="calico-system" Pod="whisker-78cff6db84-5gv4l" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" Dec 16 02:10:42.139483 containerd[1662]: 2025-12-16 02:10:42.109 [INFO][4084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Namespace="calico-system" Pod="whisker-78cff6db84-5gv4l" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" Dec 16 02:10:42.139524 containerd[1662]: 2025-12-16 02:10:42.110 [INFO][4084] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Namespace="calico-system" Pod="whisker-78cff6db84-5gv4l" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0", GenerateName:"whisker-78cff6db84-", Namespace:"calico-system", SelfLink:"", UID:"1d880e3b-9ee9-4ee7-a578-c81cb365da0d", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cff6db84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2", Pod:"whisker-78cff6db84-5gv4l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f055f5b344", MAC:"f2:36:32:51:11:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:42.139569 containerd[1662]: 2025-12-16 02:10:42.134 [INFO][4084] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" Namespace="calico-system" Pod="whisker-78cff6db84-5gv4l" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-whisker--78cff6db84--5gv4l-eth0" Dec 16 02:10:42.173162 containerd[1662]: time="2025-12-16T02:10:42.173106296Z" level=info msg="connecting to shim a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2" address="unix:///run/containerd/s/765823bea35d06a8a27b20e227dc6466036099cc8821e6f2b3c72457643fe572" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:42.206171 systemd[1]: Started cri-containerd-a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2.scope - libcontainer container a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2. Dec 16 02:10:42.225000 audit: BPF prog-id=175 op=LOAD Dec 16 02:10:42.226000 audit: BPF prog-id=176 op=LOAD Dec 16 02:10:42.226000 audit[4232]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4217 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323830323231366435313230663430363365663836383861343732 Dec 16 02:10:42.227000 audit: BPF prog-id=176 op=UNLOAD Dec 16 02:10:42.227000 audit[4232]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4217 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.227000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323830323231366435313230663430363365663836383861343732 Dec 16 02:10:42.227000 audit: BPF prog-id=177 op=LOAD Dec 16 02:10:42.227000 audit[4232]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4217 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323830323231366435313230663430363365663836383861343732 Dec 16 02:10:42.227000 audit: BPF prog-id=178 op=LOAD Dec 16 02:10:42.227000 audit[4232]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4217 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323830323231366435313230663430363365663836383861343732 Dec 16 02:10:42.227000 audit: BPF prog-id=178 op=UNLOAD Dec 16 02:10:42.227000 audit[4232]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4217 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 02:10:42.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323830323231366435313230663430363365663836383861343732 Dec 16 02:10:42.227000 audit: BPF prog-id=177 op=UNLOAD Dec 16 02:10:42.227000 audit[4232]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4217 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323830323231366435313230663430363365663836383861343732 Dec 16 02:10:42.227000 audit: BPF prog-id=179 op=LOAD Dec 16 02:10:42.227000 audit[4232]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4217 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323830323231366435313230663430363365663836383861343732 Dec 16 02:10:42.268303 containerd[1662]: time="2025-12-16T02:10:42.268184542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cff6db84-5gv4l,Uid:1d880e3b-9ee9-4ee7-a578-c81cb365da0d,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"a32802216d5120f4063ef8688a4724b6f1ed69ac09f64d5e0024d30b072d57d2\"" Dec 16 02:10:42.270267 containerd[1662]: time="2025-12-16T02:10:42.270242628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:10:42.302000 audit: BPF prog-id=180 op=LOAD Dec 16 02:10:42.302000 audit[4284]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8fb9608 a2=98 a3=fffff8fb95f8 items=0 ppid=4121 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:42.302000 audit: BPF prog-id=180 op=UNLOAD Dec 16 02:10:42.302000 audit[4284]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff8fb95d8 a3=0 items=0 ppid=4121 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:42.302000 audit: BPF prog-id=181 op=LOAD Dec 16 02:10:42.302000 audit[4284]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8fb94b8 a2=74 a3=95 items=0 ppid=4121 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 02:10:42.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:42.302000 audit: BPF prog-id=181 op=UNLOAD Dec 16 02:10:42.302000 audit[4284]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4121 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:42.302000 audit: BPF prog-id=182 op=LOAD Dec 16 02:10:42.302000 audit[4284]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8fb94e8 a2=40 a3=fffff8fb9518 items=0 ppid=4121 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:42.302000 audit: BPF prog-id=182 op=UNLOAD Dec 16 02:10:42.302000 audit[4284]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff8fb9518 items=0 ppid=4121 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:42.304000 audit: BPF prog-id=183 op=LOAD Dec 16 02:10:42.304000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9b5e7b8 a2=98 a3=ffffe9b5e7a8 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.304000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.304000 audit: BPF prog-id=183 op=UNLOAD Dec 16 02:10:42.304000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe9b5e788 a3=0 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.304000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.304000 audit: BPF prog-id=184 op=LOAD Dec 16 02:10:42.304000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe9b5e448 a2=74 a3=95 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.304000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.304000 audit: BPF prog-id=184 op=UNLOAD Dec 16 02:10:42.304000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 
a1=57156c a2=74 a3=95 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.304000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.304000 audit: BPF prog-id=185 op=LOAD Dec 16 02:10:42.304000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe9b5e4a8 a2=94 a3=2 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.304000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.304000 audit: BPF prog-id=185 op=UNLOAD Dec 16 02:10:42.304000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.304000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.403000 audit: BPF prog-id=186 op=LOAD Dec 16 02:10:42.403000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe9b5e468 a2=40 a3=ffffe9b5e498 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.403000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.403000 audit: BPF prog-id=186 op=UNLOAD Dec 16 02:10:42.403000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe9b5e498 items=0 ppid=4121 pid=4285 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.403000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.413000 audit: BPF prog-id=187 op=LOAD Dec 16 02:10:42.413000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe9b5e478 a2=94 a3=4 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.413000 audit: BPF prog-id=187 op=UNLOAD Dec 16 02:10:42.413000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.413000 audit: BPF prog-id=188 op=LOAD Dec 16 02:10:42.413000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe9b5e2b8 a2=94 a3=5 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.413000 audit: BPF prog-id=188 op=UNLOAD Dec 16 02:10:42.413000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.413000 audit: BPF prog-id=189 op=LOAD Dec 16 02:10:42.413000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe9b5e4e8 a2=94 a3=6 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.413000 audit: BPF prog-id=189 op=UNLOAD Dec 16 02:10:42.413000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.413000 audit: BPF prog-id=190 op=LOAD Dec 16 02:10:42.416241 kernel: kauditd_printk_skb: 93 callbacks suppressed Dec 16 02:10:42.416341 kernel: audit: type=1334 audit(1765851042.413:604): prog-id=190 op=LOAD Dec 16 02:10:42.416365 kernel: audit: type=1300 audit(1765851042.413:604): arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe9b5dcb8 a2=94 a3=83 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.413000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe9b5dcb8 a2=94 a3=83 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.420885 kernel: audit: type=1327 audit(1765851042.413:604): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.414000 audit: BPF prog-id=191 op=LOAD Dec 16 02:10:42.414000 audit[4285]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe9b5da78 a2=94 a3=2 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.424756 kubelet[2916]: I1216 02:10:42.424692 2916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60bcca8f-0026-44f8-829f-16d201879e45" path="/var/lib/kubelet/pods/60bcca8f-0026-44f8-829f-16d201879e45/volumes" Dec 16 02:10:42.425899 kernel: audit: type=1334 audit(1765851042.414:605): prog-id=191 op=LOAD Dec 16 02:10:42.426603 kernel: audit: type=1300 audit(1765851042.414:605): arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe9b5da78 a2=94 a3=2 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.426623 kernel: audit: type=1327 audit(1765851042.414:605): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.418000 audit: BPF prog-id=191 op=UNLOAD Dec 16 02:10:42.428037 kernel: audit: type=1334 audit(1765851042.418:606): prog-id=191 op=UNLOAD Dec 16 02:10:42.428075 kernel: audit: type=1300 audit(1765851042.418:606): arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.418000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.418000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.432345 kernel: audit: type=1327 audit(1765851042.418:606): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.432400 kernel: audit: type=1334 audit(1765851042.419:607): prog-id=190 op=UNLOAD Dec 16 02:10:42.419000 audit: BPF prog-id=190 op=UNLOAD Dec 16 02:10:42.419000 audit[4285]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3d92f620 a3=3d922b00 items=0 ppid=4121 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.419000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:42.431000 audit: BPF prog-id=192 op=LOAD Dec 16 02:10:42.431000 audit[4288]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4f47b68 a2=98 a3=fffff4f47b58 items=0 ppid=4121 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.431000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 
02:10:42.431000 audit: BPF prog-id=192 op=UNLOAD Dec 16 02:10:42.431000 audit[4288]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff4f47b38 a3=0 items=0 ppid=4121 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.431000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:42.431000 audit: BPF prog-id=193 op=LOAD Dec 16 02:10:42.431000 audit[4288]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4f47a18 a2=74 a3=95 items=0 ppid=4121 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.431000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:42.432000 audit: BPF prog-id=193 op=UNLOAD Dec 16 02:10:42.432000 audit[4288]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4121 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.432000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:42.432000 audit: BPF prog-id=194 op=LOAD Dec 16 02:10:42.432000 audit[4288]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4f47a48 a2=40 a3=fffff4f47a78 items=0 ppid=4121 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.432000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:42.432000 audit: BPF prog-id=194 op=UNLOAD Dec 16 02:10:42.432000 audit[4288]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff4f47a78 items=0 ppid=4121 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.432000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:42.496459 systemd-networkd[1582]: vxlan.calico: Link UP Dec 16 02:10:42.496469 systemd-networkd[1582]: vxlan.calico: Gained carrier Dec 16 02:10:42.512000 audit: BPF prog-id=195 op=LOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7684228 a2=98 a3=ffffd7684218 
items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=195 op=UNLOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd76841f8 a3=0 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=196 op=LOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7683f08 a2=74 a3=95 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=196 op=UNLOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=197 op=LOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7683f68 a2=94 a3=2 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=197 op=UNLOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=198 op=LOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd7683de8 a2=40 a3=ffffd7683e18 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=198 op=UNLOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd7683e18 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=199 op=LOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd7683f38 a2=94 a3=b7 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.512000 audit: BPF prog-id=199 op=UNLOAD Dec 16 02:10:42.512000 audit[4315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.513000 audit: BPF prog-id=200 op=LOAD Dec 16 02:10:42.513000 audit[4315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd76835e8 a2=94 a3=2 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.513000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.513000 audit: BPF prog-id=200 op=UNLOAD Dec 16 02:10:42.513000 audit[4315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.513000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.513000 audit: BPF prog-id=201 op=LOAD Dec 16 02:10:42.513000 audit[4315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd7683778 a2=94 a3=30 items=0 ppid=4121 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.513000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:42.515000 audit: BPF prog-id=202 op=LOAD Dec 16 02:10:42.515000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc163e18 a2=98 a3=fffffc163e08 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.515000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.515000 audit: BPF prog-id=202 op=UNLOAD Dec 16 02:10:42.515000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffc163de8 a3=0 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.515000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.515000 audit: BPF prog-id=203 op=LOAD Dec 16 02:10:42.515000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc163aa8 a2=74 a3=95 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.515000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.515000 audit: BPF prog-id=203 op=UNLOAD Dec 16 02:10:42.515000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.515000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.515000 audit: BPF prog-id=204 op=LOAD Dec 16 02:10:42.515000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc163b08 a2=94 a3=2 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.515000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.515000 audit: BPF prog-id=204 op=UNLOAD Dec 16 02:10:42.515000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.515000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.623301 containerd[1662]: time="2025-12-16T02:10:42.623244888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:42.624873 containerd[1662]: time="2025-12-16T02:10:42.624794572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:10:42.624954 containerd[1662]: time="2025-12-16T02:10:42.624855492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:42.625087 kubelet[2916]: E1216 02:10:42.625053 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:42.625158 kubelet[2916]: E1216 02:10:42.625100 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:42.625218 kubelet[2916]: E1216 02:10:42.625193 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-78cff6db84-5gv4l_calico-system(1d880e3b-9ee9-4ee7-a578-c81cb365da0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:42.627112 containerd[1662]: time="2025-12-16T02:10:42.627078459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:10:42.626000 audit: BPF prog-id=205 op=LOAD Dec 16 02:10:42.626000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc163ac8 a2=40 a3=fffffc163af8 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.626000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.626000 audit: BPF prog-id=205 op=UNLOAD Dec 16 02:10:42.626000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffc163af8 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.626000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.638000 audit: BPF prog-id=206 op=LOAD Dec 16 02:10:42.638000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffc163ad8 a2=94 a3=4 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.638000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.638000 audit: BPF prog-id=206 op=UNLOAD Dec 16 02:10:42.638000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.638000 audit: BPF prog-id=207 op=LOAD Dec 16 02:10:42.638000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc163918 a2=94 a3=5 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.639000 audit: BPF prog-id=207 op=UNLOAD Dec 16 02:10:42.639000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.639000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.639000 audit: BPF prog-id=208 op=LOAD Dec 16 02:10:42.639000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffc163b48 a2=94 a3=6 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.639000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.639000 audit: BPF prog-id=208 op=UNLOAD Dec 16 02:10:42.639000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.639000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.639000 audit: BPF prog-id=209 op=LOAD Dec 16 02:10:42.639000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffc163318 a2=94 a3=83 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.639000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.639000 audit: BPF prog-id=210 op=LOAD Dec 16 02:10:42.639000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffc1630d8 a2=94 a3=2 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.639000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.639000 audit: BPF prog-id=210 op=UNLOAD Dec 16 02:10:42.639000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.639000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.640000 audit: BPF prog-id=209 op=UNLOAD Dec 16 02:10:42.640000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=19d91620 a3=19d84b00 items=0 ppid=4121 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.640000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:42.652000 audit: BPF prog-id=201 op=UNLOAD Dec 16 02:10:42.652000 audit[4121]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40005c6700 a2=0 a3=0 items=0 ppid=4108 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.652000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 02:10:42.704000 audit[4345]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4345 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:42.704000 audit[4345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffcfc12260 a2=0 a3=ffffb9b10fa8 items=0 ppid=4121 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.704000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:42.712000 audit[4343]: NETFILTER_CFG table=raw:120 family=2 entries=21 op=nft_register_chain pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:42.712000 audit[4343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd55ec100 a2=0 a3=ffff91c85fa8 items=0 ppid=4121 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.712000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:42.717000 audit[4351]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4351 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:42.717000 audit[4351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc08fade0 a2=0 a3=ffff89976fa8 items=0 ppid=4121 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.717000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:42.719000 audit[4346]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4346 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:42.719000 audit[4346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe106bb80 a2=0 a3=ffff89c05fa8 items=0 ppid=4121 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:42.719000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:42.955072 containerd[1662]: time="2025-12-16T02:10:42.954966643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:42.956674 containerd[1662]: time="2025-12-16T02:10:42.956516728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:10:42.956674 containerd[1662]: time="2025-12-16T02:10:42.956607688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:42.956971 kubelet[2916]: E1216 02:10:42.956794 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:42.956971 kubelet[2916]: E1216 02:10:42.956841 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:42.956971 kubelet[2916]: E1216 02:10:42.956937 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-78cff6db84-5gv4l_calico-system(1d880e3b-9ee9-4ee7-a578-c81cb365da0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:42.957307 kubelet[2916]: E1216 02:10:42.957143 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:10:43.295065 systemd-networkd[1582]: cali8f055f5b344: Gained IPv6LL Dec 16 02:10:43.453670 kubelet[2916]: I1216 02:10:43.453516 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 02:10:43.555619 kubelet[2916]: E1216 02:10:43.555292 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:10:43.578000 audit[4410]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:43.578000 audit[4410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdaa70020 a2=0 a3=1 items=0 ppid=3054 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:43.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:43.588000 audit[4410]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:43.588000 audit[4410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdaa70020 a2=0 a3=1 items=0 ppid=3054 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:43.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:44.063150 systemd-networkd[1582]: vxlan.calico: Gained IPv6LL Dec 16 02:10:44.427367 containerd[1662]: time="2025-12-16T02:10:44.427268143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687df49b8c-8hkjd,Uid:8f7f3b6e-a739-4bf5-a67c-595cf0d67494,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:44.430552 containerd[1662]: time="2025-12-16T02:10:44.430436672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-55cxh,Uid:4a2c8657-1432-4d11-acb7-c419b126c948,Namespace:kube-system,Attempt:0,}" Dec 16 02:10:44.552529 systemd-networkd[1582]: cali35c0f41f241: Link UP Dec 16 02:10:44.553386 systemd-networkd[1582]: cali35c0f41f241: Gained carrier Dec 16 02:10:44.567963 containerd[1662]: 2025-12-16 02:10:44.479 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0 calico-kube-controllers-687df49b8c- calico-system 8f7f3b6e-a739-4bf5-a67c-595cf0d67494 853 0 2025-12-16 02:10:18 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:687df49b8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-9-b4376e68e3 calico-kube-controllers-687df49b8c-8hkjd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali35c0f41f241 [] [] }} ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Namespace="calico-system" Pod="calico-kube-controllers-687df49b8c-8hkjd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-" Dec 16 02:10:44.567963 containerd[1662]: 2025-12-16 02:10:44.479 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Namespace="calico-system" Pod="calico-kube-controllers-687df49b8c-8hkjd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" Dec 16 02:10:44.567963 containerd[1662]: 2025-12-16 02:10:44.511 [INFO][4443] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" HandleID="k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.511 [INFO][4443] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" HandleID="k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059a7c0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-9-b4376e68e3", "pod":"calico-kube-controllers-687df49b8c-8hkjd", "timestamp":"2025-12-16 02:10:44.511279315 +0000 UTC"}, Hostname:"ci-4547-0-0-9-b4376e68e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.511 [INFO][4443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.511 [INFO][4443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.511 [INFO][4443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-9-b4376e68e3' Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.521 [INFO][4443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.526 [INFO][4443] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.531 [INFO][4443] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.533 [INFO][4443] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568176 containerd[1662]: 2025-12-16 02:10:44.535 [INFO][4443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568365 containerd[1662]: 2025-12-16 02:10:44.535 [INFO][4443] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 
handle="k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568365 containerd[1662]: 2025-12-16 02:10:44.537 [INFO][4443] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a Dec 16 02:10:44.568365 containerd[1662]: 2025-12-16 02:10:44.540 [INFO][4443] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568365 containerd[1662]: 2025-12-16 02:10:44.546 [INFO][4443] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.66/26] block=192.168.71.64/26 handle="k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568365 containerd[1662]: 2025-12-16 02:10:44.547 [INFO][4443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.66/26] handle="k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.568365 containerd[1662]: 2025-12-16 02:10:44.547 [INFO][4443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:44.568365 containerd[1662]: 2025-12-16 02:10:44.547 [INFO][4443] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.66/26] IPv6=[] ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" HandleID="k8s-pod-network.a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" Dec 16 02:10:44.568493 containerd[1662]: 2025-12-16 02:10:44.550 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Namespace="calico-system" Pod="calico-kube-controllers-687df49b8c-8hkjd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0", GenerateName:"calico-kube-controllers-687df49b8c-", Namespace:"calico-system", SelfLink:"", UID:"8f7f3b6e-a739-4bf5-a67c-595cf0d67494", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"687df49b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"", Pod:"calico-kube-controllers-687df49b8c-8hkjd", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali35c0f41f241", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:44.568538 containerd[1662]: 2025-12-16 02:10:44.550 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.66/32] ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Namespace="calico-system" Pod="calico-kube-controllers-687df49b8c-8hkjd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" Dec 16 02:10:44.568538 containerd[1662]: 2025-12-16 02:10:44.550 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35c0f41f241 ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Namespace="calico-system" Pod="calico-kube-controllers-687df49b8c-8hkjd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" Dec 16 02:10:44.568538 containerd[1662]: 2025-12-16 02:10:44.553 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Namespace="calico-system" Pod="calico-kube-controllers-687df49b8c-8hkjd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" Dec 16 02:10:44.568601 containerd[1662]: 2025-12-16 02:10:44.555 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Namespace="calico-system" Pod="calico-kube-controllers-687df49b8c-8hkjd" 
WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0", GenerateName:"calico-kube-controllers-687df49b8c-", Namespace:"calico-system", SelfLink:"", UID:"8f7f3b6e-a739-4bf5-a67c-595cf0d67494", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"687df49b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a", Pod:"calico-kube-controllers-687df49b8c-8hkjd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali35c0f41f241", MAC:"ea:e4:20:b4:8c:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:44.568664 containerd[1662]: 2025-12-16 02:10:44.565 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" Namespace="calico-system" 
Pod="calico-kube-controllers-687df49b8c-8hkjd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--kube--controllers--687df49b8c--8hkjd-eth0" Dec 16 02:10:44.579000 audit[4468]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4468 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:44.579000 audit[4468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffcaca3ff0 a2=0 a3=ffff89409fa8 items=0 ppid=4121 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.579000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:44.591647 containerd[1662]: time="2025-12-16T02:10:44.591518996Z" level=info msg="connecting to shim a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a" address="unix:///run/containerd/s/b02fd8b4dd1d00a766af1cf32cfbeda425ecaaf9d1a8a31fe4a6047f2a85afe5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:44.618104 systemd[1]: Started cri-containerd-a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a.scope - libcontainer container a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a. 
Dec 16 02:10:44.628000 audit: BPF prog-id=211 op=LOAD Dec 16 02:10:44.629000 audit: BPF prog-id=212 op=LOAD Dec 16 02:10:44.629000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4478 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138363730316361363062613233323631353335313138363937326338 Dec 16 02:10:44.629000 audit: BPF prog-id=212 op=UNLOAD Dec 16 02:10:44.629000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138363730316361363062613233323631353335313138363937326338 Dec 16 02:10:44.630000 audit: BPF prog-id=213 op=LOAD Dec 16 02:10:44.630000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4478 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.630000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138363730316361363062613233323631353335313138363937326338 Dec 16 02:10:44.630000 audit: BPF prog-id=214 op=LOAD Dec 16 02:10:44.630000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4478 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138363730316361363062613233323631353335313138363937326338 Dec 16 02:10:44.630000 audit: BPF prog-id=214 op=UNLOAD Dec 16 02:10:44.630000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138363730316361363062613233323631353335313138363937326338 Dec 16 02:10:44.630000 audit: BPF prog-id=213 op=UNLOAD Dec 16 02:10:44.630000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 02:10:44.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138363730316361363062613233323631353335313138363937326338 Dec 16 02:10:44.630000 audit: BPF prog-id=215 op=LOAD Dec 16 02:10:44.630000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4478 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138363730316361363062613233323631353335313138363937326338 Dec 16 02:10:44.658767 containerd[1662]: time="2025-12-16T02:10:44.658726437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687df49b8c-8hkjd,Uid:8f7f3b6e-a739-4bf5-a67c-595cf0d67494,Namespace:calico-system,Attempt:0,} returns sandbox id \"a86701ca60ba232615351186972c822c02c28fa3f7e04e2bde4765a20f4afb9a\"" Dec 16 02:10:44.660365 containerd[1662]: time="2025-12-16T02:10:44.660333442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:10:44.666171 systemd-networkd[1582]: calie331ab6e8e9: Link UP Dec 16 02:10:44.666893 systemd-networkd[1582]: calie331ab6e8e9: Gained carrier Dec 16 02:10:44.682952 containerd[1662]: 2025-12-16 02:10:44.485 [INFO][4425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0 coredns-66bc5c9577- kube-system 4a2c8657-1432-4d11-acb7-c419b126c948 849 0 2025-12-16 02:10:02 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-9-b4376e68e3 coredns-66bc5c9577-55cxh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie331ab6e8e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Namespace="kube-system" Pod="coredns-66bc5c9577-55cxh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-" Dec 16 02:10:44.682952 containerd[1662]: 2025-12-16 02:10:44.485 [INFO][4425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Namespace="kube-system" Pod="coredns-66bc5c9577-55cxh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" Dec 16 02:10:44.682952 containerd[1662]: 2025-12-16 02:10:44.514 [INFO][4450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" HandleID="k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Workload="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.514 [INFO][4450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" HandleID="k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Workload="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000116d10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-9-b4376e68e3", "pod":"coredns-66bc5c9577-55cxh", "timestamp":"2025-12-16 
02:10:44.514666485 +0000 UTC"}, Hostname:"ci-4547-0-0-9-b4376e68e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.514 [INFO][4450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.547 [INFO][4450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.547 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-9-b4376e68e3' Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.622 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.630 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.635 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.638 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.683140 containerd[1662]: 2025-12-16 02:10:44.641 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.684430 containerd[1662]: 2025-12-16 02:10:44.641 [INFO][4450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.684430 containerd[1662]: 
2025-12-16 02:10:44.642 [INFO][4450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47 Dec 16 02:10:44.684430 containerd[1662]: 2025-12-16 02:10:44.648 [INFO][4450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.684430 containerd[1662]: 2025-12-16 02:10:44.656 [INFO][4450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.67/26] block=192.168.71.64/26 handle="k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.684430 containerd[1662]: 2025-12-16 02:10:44.656 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.67/26] handle="k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:44.684430 containerd[1662]: 2025-12-16 02:10:44.656 [INFO][4450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:44.684430 containerd[1662]: 2025-12-16 02:10:44.656 [INFO][4450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.67/26] IPv6=[] ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" HandleID="k8s-pod-network.290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Workload="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" Dec 16 02:10:44.684576 containerd[1662]: 2025-12-16 02:10:44.660 [INFO][4425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Namespace="kube-system" Pod="coredns-66bc5c9577-55cxh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4a2c8657-1432-4d11-acb7-c419b126c948", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"", Pod:"coredns-66bc5c9577-55cxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calie331ab6e8e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:44.684576 containerd[1662]: 2025-12-16 02:10:44.661 [INFO][4425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.67/32] ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Namespace="kube-system" Pod="coredns-66bc5c9577-55cxh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" Dec 16 02:10:44.684576 containerd[1662]: 2025-12-16 02:10:44.661 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie331ab6e8e9 ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Namespace="kube-system" Pod="coredns-66bc5c9577-55cxh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" Dec 16 02:10:44.684576 containerd[1662]: 2025-12-16 02:10:44.667 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Namespace="kube-system" Pod="coredns-66bc5c9577-55cxh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" Dec 16 
02:10:44.684576 containerd[1662]: 2025-12-16 02:10:44.667 [INFO][4425] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Namespace="kube-system" Pod="coredns-66bc5c9577-55cxh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4a2c8657-1432-4d11-acb7-c419b126c948", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47", Pod:"coredns-66bc5c9577-55cxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie331ab6e8e9", MAC:"1e:71:4d:42:6e:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:44.684786 containerd[1662]: 2025-12-16 02:10:44.679 [INFO][4425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" Namespace="kube-system" Pod="coredns-66bc5c9577-55cxh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--55cxh-eth0" Dec 16 02:10:44.693000 audit[4524]: NETFILTER_CFG table=filter:126 family=2 entries=46 op=nft_register_chain pid=4524 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:44.693000 audit[4524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=fffff7e064b0 a2=0 a3=ffffb2977fa8 items=0 ppid=4121 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.693000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:44.705319 containerd[1662]: time="2025-12-16T02:10:44.705275777Z" level=info msg="connecting to shim 290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47" address="unix:///run/containerd/s/69ceec6d74d26434f7754af7092fca6006c8c3dabf9d5b387b170c27523443e6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:44.728327 
systemd[1]: Started cri-containerd-290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47.scope - libcontainer container 290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47. Dec 16 02:10:44.738000 audit: BPF prog-id=216 op=LOAD Dec 16 02:10:44.738000 audit: BPF prog-id=217 op=LOAD Dec 16 02:10:44.738000 audit[4543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4533 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239306566616232333830336566326338643338613439396265643233 Dec 16 02:10:44.738000 audit: BPF prog-id=217 op=UNLOAD Dec 16 02:10:44.738000 audit[4543]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239306566616232333830336566326338643338613439396265643233 Dec 16 02:10:44.738000 audit: BPF prog-id=218 op=LOAD Dec 16 02:10:44.738000 audit[4543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4533 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:10:44.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239306566616232333830336566326338643338613439396265643233 Dec 16 02:10:44.738000 audit: BPF prog-id=219 op=LOAD Dec 16 02:10:44.738000 audit[4543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4533 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239306566616232333830336566326338643338613439396265643233 Dec 16 02:10:44.738000 audit: BPF prog-id=219 op=UNLOAD Dec 16 02:10:44.738000 audit[4543]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239306566616232333830336566326338643338613439396265643233 Dec 16 02:10:44.738000 audit: BPF prog-id=218 op=UNLOAD Dec 16 02:10:44.738000 audit[4543]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239306566616232333830336566326338643338613439396265643233 Dec 16 02:10:44.738000 audit: BPF prog-id=220 op=LOAD Dec 16 02:10:44.738000 audit[4543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4533 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239306566616232333830336566326338643338613439396265643233 Dec 16 02:10:44.761932 containerd[1662]: time="2025-12-16T02:10:44.761886227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-55cxh,Uid:4a2c8657-1432-4d11-acb7-c419b126c948,Namespace:kube-system,Attempt:0,} returns sandbox id \"290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47\"" Dec 16 02:10:44.768209 containerd[1662]: time="2025-12-16T02:10:44.768161846Z" level=info msg="CreateContainer within sandbox \"290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:10:44.778601 containerd[1662]: time="2025-12-16T02:10:44.778523437Z" level=info msg="Container c98e2a7d70bc3ac5bcd61579dfa5cfe204e0548f6e5e2461c2efc28b926f55e6: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:44.786555 containerd[1662]: time="2025-12-16T02:10:44.786506381Z" level=info msg="CreateContainer within sandbox 
\"290efab23803ef2c8d38a499bed23ffdc2665ad35d9162299527a7c256d04a47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c98e2a7d70bc3ac5bcd61579dfa5cfe204e0548f6e5e2461c2efc28b926f55e6\"" Dec 16 02:10:44.787144 containerd[1662]: time="2025-12-16T02:10:44.787107463Z" level=info msg="StartContainer for \"c98e2a7d70bc3ac5bcd61579dfa5cfe204e0548f6e5e2461c2efc28b926f55e6\"" Dec 16 02:10:44.788251 containerd[1662]: time="2025-12-16T02:10:44.788220506Z" level=info msg="connecting to shim c98e2a7d70bc3ac5bcd61579dfa5cfe204e0548f6e5e2461c2efc28b926f55e6" address="unix:///run/containerd/s/69ceec6d74d26434f7754af7092fca6006c8c3dabf9d5b387b170c27523443e6" protocol=ttrpc version=3 Dec 16 02:10:44.810182 systemd[1]: Started cri-containerd-c98e2a7d70bc3ac5bcd61579dfa5cfe204e0548f6e5e2461c2efc28b926f55e6.scope - libcontainer container c98e2a7d70bc3ac5bcd61579dfa5cfe204e0548f6e5e2461c2efc28b926f55e6. Dec 16 02:10:44.824000 audit: BPF prog-id=221 op=LOAD Dec 16 02:10:44.824000 audit: BPF prog-id=222 op=LOAD Dec 16 02:10:44.824000 audit[4569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4533 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339386532613764373062633361633562636436313537396466613563 Dec 16 02:10:44.824000 audit: BPF prog-id=222 op=UNLOAD Dec 16 02:10:44.824000 audit[4569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339386532613764373062633361633562636436313537396466613563 Dec 16 02:10:44.824000 audit: BPF prog-id=223 op=LOAD Dec 16 02:10:44.824000 audit[4569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4533 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339386532613764373062633361633562636436313537396466613563 Dec 16 02:10:44.824000 audit: BPF prog-id=224 op=LOAD Dec 16 02:10:44.824000 audit[4569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4533 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339386532613764373062633361633562636436313537396466613563 Dec 16 02:10:44.824000 audit: BPF prog-id=224 op=UNLOAD Dec 16 02:10:44.824000 audit[4569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339386532613764373062633361633562636436313537396466613563 Dec 16 02:10:44.824000 audit: BPF prog-id=223 op=UNLOAD Dec 16 02:10:44.824000 audit[4569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339386532613764373062633361633562636436313537396466613563 Dec 16 02:10:44.825000 audit: BPF prog-id=225 op=LOAD Dec 16 02:10:44.825000 audit[4569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4533 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339386532613764373062633361633562636436313537396466613563 Dec 16 02:10:44.850037 containerd[1662]: time="2025-12-16T02:10:44.849995492Z" level=info msg="StartContainer for \"c98e2a7d70bc3ac5bcd61579dfa5cfe204e0548f6e5e2461c2efc28b926f55e6\" returns successfully" Dec 16 02:10:44.998669 
containerd[1662]: time="2025-12-16T02:10:44.998430657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:45.000249 containerd[1662]: time="2025-12-16T02:10:45.000196862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:10:45.000427 containerd[1662]: time="2025-12-16T02:10:45.000298783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:45.000602 kubelet[2916]: E1216 02:10:45.000563 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:45.001174 kubelet[2916]: E1216 02:10:45.000980 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:45.001174 kubelet[2916]: E1216 02:10:45.001080 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-687df49b8c-8hkjd_calico-system(8f7f3b6e-a739-4bf5-a67c-595cf0d67494): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:45.001174 
kubelet[2916]: E1216 02:10:45.001117 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:10:45.562726 kubelet[2916]: E1216 02:10:45.562676 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:10:45.574523 kubelet[2916]: I1216 02:10:45.574440 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-55cxh" podStartSLOduration=43.574423386 podStartE2EDuration="43.574423386s" podCreationTimestamp="2025-12-16 02:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:10:45.571995179 +0000 UTC m=+47.240030361" watchObservedRunningTime="2025-12-16 02:10:45.574423386 +0000 UTC m=+47.242458568" Dec 16 02:10:45.586000 audit[4606]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:45.586000 audit[4606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffd762620 a2=0 a3=1 items=0 ppid=3054 
pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:45.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:45.591000 audit[4606]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:45.591000 audit[4606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffd762620 a2=0 a3=1 items=0 ppid=3054 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:45.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:46.047103 systemd-networkd[1582]: cali35c0f41f241: Gained IPv6LL Dec 16 02:10:46.565650 kubelet[2916]: E1216 02:10:46.565416 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:10:46.614000 audit[4608]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=4608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:46.614000 audit[4608]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd927fb10 a2=0 a3=1 
items=0 ppid=3054 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:46.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:46.623000 audit[4608]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=4608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:46.623000 audit[4608]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd927fb10 a2=0 a3=1 items=0 ppid=3054 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:46.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:46.687316 systemd-networkd[1582]: calie331ab6e8e9: Gained IPv6LL Dec 16 02:10:47.427379 containerd[1662]: time="2025-12-16T02:10:47.427299388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-b8vhz,Uid:621fd2d4-81d9-4021-8cd8-39b0addbde62,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:47.434140 containerd[1662]: time="2025-12-16T02:10:47.434108408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgqkd,Uid:451fe39a-cf01-4312-91ac-f3d2c7b640b1,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:47.436553 containerd[1662]: time="2025-12-16T02:10:47.436451015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd794ff8d-86jwz,Uid:9fa7981b-06dd-4022-adb6-f277629bbdff,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:10:47.567751 systemd-networkd[1582]: calid92cbb5c1b1: Link UP Dec 16 02:10:47.568468 systemd-networkd[1582]: calid92cbb5c1b1: Gained carrier Dec 16 
02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.482 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0 goldmane-7c778bb748- calico-system 621fd2d4-81d9-4021-8cd8-39b0addbde62 855 0 2025-12-16 02:10:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-9-b4376e68e3 goldmane-7c778bb748-b8vhz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid92cbb5c1b1 [] [] }} ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Namespace="calico-system" Pod="goldmane-7c778bb748-b8vhz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.482 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Namespace="calico-system" Pod="goldmane-7c778bb748-b8vhz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.517 [INFO][4654] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" HandleID="k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Workload="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.518 [INFO][4654] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" HandleID="k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" 
Workload="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b12c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-9-b4376e68e3", "pod":"goldmane-7c778bb748-b8vhz", "timestamp":"2025-12-16 02:10:47.51782982 +0000 UTC"}, Hostname:"ci-4547-0-0-9-b4376e68e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.518 [INFO][4654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.518 [INFO][4654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.518 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-9-b4376e68e3' Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.534 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.539 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.545 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.547 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.549 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 
containerd[1662]: 2025-12-16 02:10:47.549 [INFO][4654] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.551 [INFO][4654] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.555 [INFO][4654] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.561 [INFO][4654] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.68/26] block=192.168.71.64/26 handle="k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.561 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.68/26] handle="k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.561 [INFO][4654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:47.584076 containerd[1662]: 2025-12-16 02:10:47.561 [INFO][4654] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.68/26] IPv6=[] ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" HandleID="k8s-pod-network.0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Workload="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" Dec 16 02:10:47.585321 containerd[1662]: 2025-12-16 02:10:47.563 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Namespace="calico-system" Pod="goldmane-7c778bb748-b8vhz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"621fd2d4-81d9-4021-8cd8-39b0addbde62", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"", Pod:"goldmane-7c778bb748-b8vhz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid92cbb5c1b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:47.585321 containerd[1662]: 2025-12-16 02:10:47.563 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.68/32] ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Namespace="calico-system" Pod="goldmane-7c778bb748-b8vhz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" Dec 16 02:10:47.585321 containerd[1662]: 2025-12-16 02:10:47.563 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid92cbb5c1b1 ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Namespace="calico-system" Pod="goldmane-7c778bb748-b8vhz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" Dec 16 02:10:47.585321 containerd[1662]: 2025-12-16 02:10:47.569 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Namespace="calico-system" Pod="goldmane-7c778bb748-b8vhz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" Dec 16 02:10:47.585321 containerd[1662]: 2025-12-16 02:10:47.569 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Namespace="calico-system" Pod="goldmane-7c778bb748-b8vhz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0", GenerateName:"goldmane-7c778bb748-", 
Namespace:"calico-system", SelfLink:"", UID:"621fd2d4-81d9-4021-8cd8-39b0addbde62", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c", Pod:"goldmane-7c778bb748-b8vhz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid92cbb5c1b1", MAC:"7e:b9:0b:e2:12:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:47.585321 containerd[1662]: 2025-12-16 02:10:47.582 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" Namespace="calico-system" Pod="goldmane-7c778bb748-b8vhz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-goldmane--7c778bb748--b8vhz-eth0" Dec 16 02:10:47.599000 audit[4689]: NETFILTER_CFG table=filter:131 family=2 entries=52 op=nft_register_chain pid=4689 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:47.603407 kernel: kauditd_printk_skb: 218 callbacks suppressed Dec 16 02:10:47.603469 kernel: audit: type=1325 audit(1765851047.599:682): table=filter:131 family=2 entries=52 
op=nft_register_chain pid=4689 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:47.599000 audit[4689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27556 a0=3 a1=ffffe1292d40 a2=0 a3=ffffbd1cdfa8 items=0 ppid=4121 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.608762 kernel: audit: type=1300 audit(1765851047.599:682): arch=c00000b7 syscall=211 success=yes exit=27556 a0=3 a1=ffffe1292d40 a2=0 a3=ffffbd1cdfa8 items=0 ppid=4121 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.599000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:47.609518 containerd[1662]: time="2025-12-16T02:10:47.609479215Z" level=info msg="connecting to shim 0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c" address="unix:///run/containerd/s/8978044749ac24558d847ead84b5c48d873c455e640d8e97deca84bb841178d7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:47.611261 kernel: audit: type=1327 audit(1765851047.599:682): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:47.634056 systemd[1]: Started cri-containerd-0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c.scope - libcontainer container 0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c. 
Dec 16 02:10:47.652000 audit: BPF prog-id=226 op=LOAD Dec 16 02:10:47.654962 kernel: audit: type=1334 audit(1765851047.652:683): prog-id=226 op=LOAD Dec 16 02:10:47.654000 audit: BPF prog-id=227 op=LOAD Dec 16 02:10:47.654000 audit[4708]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4698 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.659248 kernel: audit: type=1334 audit(1765851047.654:684): prog-id=227 op=LOAD Dec 16 02:10:47.659409 kernel: audit: type=1300 audit(1765851047.654:684): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4698 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.662900 kernel: audit: type=1327 audit(1765851047.654:684): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.663434 kernel: audit: type=1334 audit(1765851047.654:685): prog-id=227 op=UNLOAD Dec 16 02:10:47.654000 audit: BPF prog-id=227 op=UNLOAD Dec 16 02:10:47.654000 audit[4708]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4698 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.667886 kernel: audit: type=1300 audit(1765851047.654:685): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4698 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.667942 kernel: audit: type=1327 audit(1765851047.654:685): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.654000 audit: BPF prog-id=228 op=LOAD Dec 16 02:10:47.654000 audit[4708]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4698 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.658000 audit: BPF prog-id=229 op=LOAD Dec 16 02:10:47.658000 audit[4708]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4698 pid=4708 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.658000 audit: BPF prog-id=229 op=UNLOAD Dec 16 02:10:47.658000 audit[4708]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4698 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.658000 audit: BPF prog-id=228 op=UNLOAD Dec 16 02:10:47.658000 audit[4708]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4698 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.658000 audit: BPF prog-id=230 op=LOAD Dec 16 02:10:47.658000 audit[4708]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4698 
pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363266623833643238623165643931346161663465373064376331 Dec 16 02:10:47.681423 systemd-networkd[1582]: calibcf43d17d95: Link UP Dec 16 02:10:47.684395 systemd-networkd[1582]: calibcf43d17d95: Gained carrier Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.500 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0 csi-node-driver- calico-system 451fe39a-cf01-4312-91ac-f3d2c7b640b1 745 0 2025-12-16 02:10:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-9-b4376e68e3 csi-node-driver-sgqkd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibcf43d17d95 [] [] }} ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Namespace="calico-system" Pod="csi-node-driver-sgqkd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.500 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Namespace="calico-system" Pod="csi-node-driver-sgqkd" 
WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.532 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" HandleID="k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Workload="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.532 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" HandleID="k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Workload="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c490), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-9-b4376e68e3", "pod":"csi-node-driver-sgqkd", "timestamp":"2025-12-16 02:10:47.532062702 +0000 UTC"}, Hostname:"ci-4547-0-0-9-b4376e68e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.532 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.561 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.561 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-9-b4376e68e3' Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.635 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.642 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.649 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.651 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.654 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.654 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.657 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434 Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.664 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.675 [INFO][4664] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.71.69/26] block=192.168.71.64/26 handle="k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.675 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.69/26] handle="k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.675 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:10:47.701993 containerd[1662]: 2025-12-16 02:10:47.675 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.69/26] IPv6=[] ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" HandleID="k8s-pod-network.98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Workload="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" Dec 16 02:10:47.702655 containerd[1662]: 2025-12-16 02:10:47.678 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Namespace="calico-system" Pod="csi-node-driver-sgqkd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"451fe39a-cf01-4312-91ac-f3d2c7b640b1", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"", Pod:"csi-node-driver-sgqkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibcf43d17d95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:47.702655 containerd[1662]: 2025-12-16 02:10:47.678 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.69/32] ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Namespace="calico-system" Pod="csi-node-driver-sgqkd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" Dec 16 02:10:47.702655 containerd[1662]: 2025-12-16 02:10:47.678 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibcf43d17d95 ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Namespace="calico-system" Pod="csi-node-driver-sgqkd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" Dec 16 02:10:47.702655 containerd[1662]: 2025-12-16 02:10:47.684 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Namespace="calico-system" Pod="csi-node-driver-sgqkd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" Dec 16 02:10:47.702655 
containerd[1662]: 2025-12-16 02:10:47.684 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Namespace="calico-system" Pod="csi-node-driver-sgqkd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"451fe39a-cf01-4312-91ac-f3d2c7b640b1", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434", Pod:"csi-node-driver-sgqkd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibcf43d17d95", MAC:"ba:4d:bd:c2:71:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:47.702655 containerd[1662]: 
2025-12-16 02:10:47.700 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" Namespace="calico-system" Pod="csi-node-driver-sgqkd" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-csi--node--driver--sgqkd-eth0" Dec 16 02:10:47.722000 audit[4745]: NETFILTER_CFG table=filter:132 family=2 entries=48 op=nft_register_chain pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:47.722000 audit[4745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23140 a0=3 a1=fffff421b2f0 a2=0 a3=ffff9396efa8 items=0 ppid=4121 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.722000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:47.727367 containerd[1662]: time="2025-12-16T02:10:47.727328329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-b8vhz,Uid:621fd2d4-81d9-4021-8cd8-39b0addbde62,Namespace:calico-system,Attempt:0,} returns sandbox id \"0862fb83d28b1ed914aaf4e70d7c15128e63a7947d325f1b07d0fe7bc046a10c\"" Dec 16 02:10:47.729972 containerd[1662]: time="2025-12-16T02:10:47.729853816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:10:47.747566 containerd[1662]: time="2025-12-16T02:10:47.747519389Z" level=info msg="connecting to shim 98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434" address="unix:///run/containerd/s/ef734fa95040b0b8bfdb171eb544c3127f5828e0c236f2f94d5bb422ee59fe7c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:47.777270 systemd[1]: Started cri-containerd-98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434.scope - libcontainer 
container 98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434. Dec 16 02:10:47.786745 systemd-networkd[1582]: caliddb9856c558: Link UP Dec 16 02:10:47.787364 systemd-networkd[1582]: caliddb9856c558: Gained carrier Dec 16 02:10:47.794000 audit: BPF prog-id=231 op=LOAD Dec 16 02:10:47.794000 audit: BPF prog-id=232 op=LOAD Dec 16 02:10:47.794000 audit[4772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4761 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623235316537633233346330343534373031623061376435646162 Dec 16 02:10:47.794000 audit: BPF prog-id=232 op=UNLOAD Dec 16 02:10:47.794000 audit[4772]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623235316537633233346330343534373031623061376435646162 Dec 16 02:10:47.794000 audit: BPF prog-id=233 op=LOAD Dec 16 02:10:47.794000 audit[4772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4761 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623235316537633233346330343534373031623061376435646162 Dec 16 02:10:47.795000 audit: BPF prog-id=234 op=LOAD Dec 16 02:10:47.795000 audit[4772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4761 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623235316537633233346330343534373031623061376435646162 Dec 16 02:10:47.795000 audit: BPF prog-id=234 op=UNLOAD Dec 16 02:10:47.795000 audit[4772]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623235316537633233346330343534373031623061376435646162 Dec 16 02:10:47.795000 audit: BPF prog-id=233 op=UNLOAD Dec 16 02:10:47.795000 audit[4772]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623235316537633233346330343534373031623061376435646162 Dec 16 02:10:47.795000 audit: BPF prog-id=235 op=LOAD Dec 16 02:10:47.795000 audit[4772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4761 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623235316537633233346330343534373031623061376435646162 Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.502 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0 calico-apiserver-6cd794ff8d- calico-apiserver 9fa7981b-06dd-4022-adb6-f277629bbdff 852 0 2025-12-16 02:10:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd794ff8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-9-b4376e68e3 calico-apiserver-6cd794ff8d-86jwz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliddb9856c558 [] [] }} ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" 
Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-86jwz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.502 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-86jwz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.535 [INFO][4670] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" HandleID="k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.536 [INFO][4670] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" HandleID="k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-9-b4376e68e3", "pod":"calico-apiserver-6cd794ff8d-86jwz", "timestamp":"2025-12-16 02:10:47.535709153 +0000 UTC"}, Hostname:"ci-4547-0-0-9-b4376e68e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.536 [INFO][4670] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.676 [INFO][4670] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.676 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-9-b4376e68e3' Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.735 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.743 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.754 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.757 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.762 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.762 [INFO][4670] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.765 [INFO][4670] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.770 [INFO][4670] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" 
host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.781 [INFO][4670] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.70/26] block=192.168.71.64/26 handle="k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.781 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.70/26] handle="k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.781 [INFO][4670] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:10:47.805856 containerd[1662]: 2025-12-16 02:10:47.781 [INFO][4670] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.70/26] IPv6=[] ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" HandleID="k8s-pod-network.33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" Dec 16 02:10:47.807460 containerd[1662]: 2025-12-16 02:10:47.784 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-86jwz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0", GenerateName:"calico-apiserver-6cd794ff8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9fa7981b-06dd-4022-adb6-f277629bbdff", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 12, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd794ff8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"", Pod:"calico-apiserver-6cd794ff8d-86jwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliddb9856c558", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:47.807460 containerd[1662]: 2025-12-16 02:10:47.784 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.70/32] ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-86jwz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" Dec 16 02:10:47.807460 containerd[1662]: 2025-12-16 02:10:47.784 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddb9856c558 ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-86jwz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" Dec 16 02:10:47.807460 containerd[1662]: 2025-12-16 02:10:47.786 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-86jwz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" Dec 16 02:10:47.807460 containerd[1662]: 2025-12-16 02:10:47.789 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-86jwz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0", GenerateName:"calico-apiserver-6cd794ff8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9fa7981b-06dd-4022-adb6-f277629bbdff", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd794ff8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe", Pod:"calico-apiserver-6cd794ff8d-86jwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliddb9856c558", MAC:"b6:5d:38:98:dd:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:47.807460 containerd[1662]: 2025-12-16 02:10:47.804 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-86jwz" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--86jwz-eth0" Dec 16 02:10:47.825978 containerd[1662]: time="2025-12-16T02:10:47.825929225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgqkd,Uid:451fe39a-cf01-4312-91ac-f3d2c7b640b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"98b251e7c234c0454701b0a7d5dab0729cb7620ca44e5b8190c2c70f12b8b434\"" Dec 16 02:10:47.834000 audit[4806]: NETFILTER_CFG table=filter:133 family=2 entries=72 op=nft_register_chain pid=4806 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:47.834000 audit[4806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=35812 a0=3 a1=ffffc105f930 a2=0 a3=ffff9538dfa8 items=0 ppid=4121 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.834000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:47.839470 containerd[1662]: time="2025-12-16T02:10:47.839363945Z" level=info msg="connecting to shim 33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe" address="unix:///run/containerd/s/03d1db5516e7e33ab2518c7ad4b34f7ecc6b75ddc596fd76b214f167b8873fb2" namespace=k8s.io 
protocol=ttrpc version=3 Dec 16 02:10:47.866068 systemd[1]: Started cri-containerd-33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe.scope - libcontainer container 33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe. Dec 16 02:10:47.875000 audit: BPF prog-id=236 op=LOAD Dec 16 02:10:47.876000 audit: BPF prog-id=237 op=LOAD Dec 16 02:10:47.876000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4815 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333623836666334393435616439393865643738396466353837303434 Dec 16 02:10:47.876000 audit: BPF prog-id=237 op=UNLOAD Dec 16 02:10:47.876000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4815 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333623836666334393435616439393865643738396466353837303434 Dec 16 02:10:47.876000 audit: BPF prog-id=238 op=LOAD Dec 16 02:10:47.876000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4815 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333623836666334393435616439393865643738396466353837303434 Dec 16 02:10:47.876000 audit: BPF prog-id=239 op=LOAD Dec 16 02:10:47.876000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4815 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333623836666334393435616439393865643738396466353837303434 Dec 16 02:10:47.876000 audit: BPF prog-id=239 op=UNLOAD Dec 16 02:10:47.876000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4815 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333623836666334393435616439393865643738396466353837303434 Dec 16 02:10:47.876000 audit: BPF prog-id=238 op=UNLOAD Dec 16 02:10:47.876000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4815 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333623836666334393435616439393865643738396466353837303434 Dec 16 02:10:47.876000 audit: BPF prog-id=240 op=LOAD Dec 16 02:10:47.876000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4815 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333623836666334393435616439393865643738396466353837303434 Dec 16 02:10:47.901310 containerd[1662]: time="2025-12-16T02:10:47.901263291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd794ff8d-86jwz,Uid:9fa7981b-06dd-4022-adb6-f277629bbdff,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"33b86fc4945ad998ed789df587044ebffa69bfd55cafe806903ad35a927891fe\"" Dec 16 02:10:48.094959 containerd[1662]: time="2025-12-16T02:10:48.094671791Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:48.097148 containerd[1662]: time="2025-12-16T02:10:48.097109639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:10:48.097217 containerd[1662]: 
time="2025-12-16T02:10:48.097190479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:48.097454 kubelet[2916]: E1216 02:10:48.097397 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:48.097454 kubelet[2916]: E1216 02:10:48.097446 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:48.097766 kubelet[2916]: E1216 02:10:48.097594 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-b8vhz_calico-system(621fd2d4-81d9-4021-8cd8-39b0addbde62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:48.097766 kubelet[2916]: E1216 02:10:48.097634 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:10:48.098062 containerd[1662]: time="2025-12-16T02:10:48.098037641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:10:48.421193 containerd[1662]: 
time="2025-12-16T02:10:48.421004571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:48.423590 containerd[1662]: time="2025-12-16T02:10:48.423493018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:10:48.423666 containerd[1662]: time="2025-12-16T02:10:48.423580779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:48.424355 kubelet[2916]: E1216 02:10:48.424323 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:48.424427 kubelet[2916]: E1216 02:10:48.424361 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:48.424680 kubelet[2916]: E1216 02:10:48.424470 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:48.424754 containerd[1662]: time="2025-12-16T02:10:48.424602942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:48.430927 containerd[1662]: time="2025-12-16T02:10:48.430891121Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v65lh,Uid:8d5776d1-6402-4926-ae94-65987d42425c,Namespace:kube-system,Attempt:0,}" Dec 16 02:10:48.433726 containerd[1662]: time="2025-12-16T02:10:48.433675169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd794ff8d-6hwpw,Uid:bf9a6289-a4bc-424f-93a6-cea4264ad5e4,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:10:48.573974 systemd-networkd[1582]: cali8e4e5b6e47b: Link UP Dec 16 02:10:48.574375 systemd-networkd[1582]: cali8e4e5b6e47b: Gained carrier Dec 16 02:10:48.577628 kubelet[2916]: E1216 02:10:48.577577 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.501 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0 calico-apiserver-6cd794ff8d- calico-apiserver bf9a6289-a4bc-424f-93a6-cea4264ad5e4 854 0 2025-12-16 02:10:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd794ff8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-9-b4376e68e3 calico-apiserver-6cd794ff8d-6hwpw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e4e5b6e47b [] [] }} 
ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-6hwpw" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.501 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-6hwpw" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.524 [INFO][4883] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" HandleID="k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.524 [INFO][4883] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" HandleID="k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001182c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-9-b4376e68e3", "pod":"calico-apiserver-6cd794ff8d-6hwpw", "timestamp":"2025-12-16 02:10:48.524426041 +0000 UTC"}, Hostname:"ci-4547-0-0-9-b4376e68e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.524 [INFO][4883] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.524 [INFO][4883] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.524 [INFO][4883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-9-b4376e68e3' Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.534 [INFO][4883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.540 [INFO][4883] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.547 [INFO][4883] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.550 [INFO][4883] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.553 [INFO][4883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.553 [INFO][4883] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.555 [INFO][4883] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.562 [INFO][4883] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 
handle="k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.569 [INFO][4883] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.71/26] block=192.168.71.64/26 handle="k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.569 [INFO][4883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.71/26] handle="k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.569 [INFO][4883] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:10:48.596294 containerd[1662]: 2025-12-16 02:10:48.569 [INFO][4883] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.71/26] IPv6=[] ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" HandleID="k8s-pod-network.ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Workload="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" Dec 16 02:10:48.596911 containerd[1662]: 2025-12-16 02:10:48.571 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-6hwpw" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0", GenerateName:"calico-apiserver-6cd794ff8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf9a6289-a4bc-424f-93a6-cea4264ad5e4", ResourceVersion:"854", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd794ff8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"", Pod:"calico-apiserver-6cd794ff8d-6hwpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e4e5b6e47b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:48.596911 containerd[1662]: 2025-12-16 02:10:48.572 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.71/32] ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-6hwpw" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" Dec 16 02:10:48.596911 containerd[1662]: 2025-12-16 02:10:48.572 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e4e5b6e47b ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-6hwpw" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" Dec 16 02:10:48.596911 containerd[1662]: 2025-12-16 02:10:48.574 
[INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-6hwpw" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" Dec 16 02:10:48.596911 containerd[1662]: 2025-12-16 02:10:48.575 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-6hwpw" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0", GenerateName:"calico-apiserver-6cd794ff8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf9a6289-a4bc-424f-93a6-cea4264ad5e4", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd794ff8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd", Pod:"calico-apiserver-6cd794ff8d-6hwpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.71.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e4e5b6e47b", MAC:"22:f0:83:15:1d:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:48.596911 containerd[1662]: 2025-12-16 02:10:48.591 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" Namespace="calico-apiserver" Pod="calico-apiserver-6cd794ff8d-6hwpw" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-calico--apiserver--6cd794ff8d--6hwpw-eth0" Dec 16 02:10:48.604000 audit[4907]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:48.604000 audit[4907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd69e4530 a2=0 a3=1 items=0 ppid=3054 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.604000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:48.615000 audit[4907]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=4907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:48.615000 audit[4907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd69e4530 a2=0 a3=1 items=0 ppid=3054 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.615000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:48.625194 containerd[1662]: time="2025-12-16T02:10:48.624612862Z" level=info msg="connecting to shim ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd" address="unix:///run/containerd/s/29a935c60832bf359052cb1a7f735ebb1aee67c5567ceb02d281d6025ad11f21" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:48.630000 audit[4922]: NETFILTER_CFG table=filter:136 family=2 entries=59 op=nft_register_chain pid=4922 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:48.630000 audit[4922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29476 a0=3 a1=ffffd50df240 a2=0 a3=ffffbb0fafa8 items=0 ppid=4121 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.630000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:48.660070 systemd[1]: Started cri-containerd-ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd.scope - libcontainer container ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd. 
Dec 16 02:10:48.673000 audit: BPF prog-id=241 op=LOAD Dec 16 02:10:48.674000 audit: BPF prog-id=242 op=LOAD Dec 16 02:10:48.674000 audit[4928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4916 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565386533613663343335323736666237356232356365363031356431 Dec 16 02:10:48.675000 audit: BPF prog-id=242 op=UNLOAD Dec 16 02:10:48.675000 audit[4928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565386533613663343335323736666237356232356365363031356431 Dec 16 02:10:48.675000 audit: BPF prog-id=243 op=LOAD Dec 16 02:10:48.675000 audit[4928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4916 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.675000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565386533613663343335323736666237356232356365363031356431 Dec 16 02:10:48.675000 audit: BPF prog-id=244 op=LOAD Dec 16 02:10:48.675000 audit[4928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4916 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565386533613663343335323736666237356232356365363031356431 Dec 16 02:10:48.676000 audit: BPF prog-id=244 op=UNLOAD Dec 16 02:10:48.676000 audit[4928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565386533613663343335323736666237356232356365363031356431 Dec 16 02:10:48.676000 audit: BPF prog-id=243 op=UNLOAD Dec 16 02:10:48.676000 audit[4928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 02:10:48.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565386533613663343335323736666237356232356365363031356431 Dec 16 02:10:48.676000 audit: BPF prog-id=245 op=LOAD Dec 16 02:10:48.676000 audit[4928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4916 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565386533613663343335323736666237356232356365363031356431 Dec 16 02:10:48.679532 systemd-networkd[1582]: caliab12fe2d1b9: Link UP Dec 16 02:10:48.679710 systemd-networkd[1582]: caliab12fe2d1b9: Gained carrier Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.507 [INFO][4852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0 coredns-66bc5c9577- kube-system 8d5776d1-6402-4926-ae94-65987d42425c 851 0 2025-12-16 02:10:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-9-b4376e68e3 coredns-66bc5c9577-v65lh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab12fe2d1b9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Namespace="kube-system" Pod="coredns-66bc5c9577-v65lh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.507 [INFO][4852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Namespace="kube-system" Pod="coredns-66bc5c9577-v65lh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.531 [INFO][4889] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" HandleID="k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Workload="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.532 [INFO][4889] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" HandleID="k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Workload="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059eaa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-9-b4376e68e3", "pod":"coredns-66bc5c9577-v65lh", "timestamp":"2025-12-16 02:10:48.531604863 +0000 UTC"}, Hostname:"ci-4547-0-0-9-b4376e68e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.532 [INFO][4889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.569 [INFO][4889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.569 [INFO][4889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-9-b4376e68e3' Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.637 [INFO][4889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.643 [INFO][4889] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.649 [INFO][4889] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.654 [INFO][4889] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.657 [INFO][4889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.657 [INFO][4889] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.659 [INFO][4889] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.664 [INFO][4889] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" 
host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.672 [INFO][4889] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.72/26] block=192.168.71.64/26 handle="k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.672 [INFO][4889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.72/26] handle="k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" host="ci-4547-0-0-9-b4376e68e3" Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.672 [INFO][4889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:10:48.692028 containerd[1662]: 2025-12-16 02:10:48.672 [INFO][4889] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.72/26] IPv6=[] ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" HandleID="k8s-pod-network.335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Workload="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" Dec 16 02:10:48.692525 containerd[1662]: 2025-12-16 02:10:48.677 [INFO][4852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Namespace="kube-system" Pod="coredns-66bc5c9577-v65lh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8d5776d1-6402-4926-ae94-65987d42425c", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"", Pod:"coredns-66bc5c9577-v65lh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab12fe2d1b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:48.692525 containerd[1662]: 2025-12-16 02:10:48.677 [INFO][4852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.72/32] ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Namespace="kube-system" Pod="coredns-66bc5c9577-v65lh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" Dec 16 02:10:48.692525 containerd[1662]: 
2025-12-16 02:10:48.677 [INFO][4852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab12fe2d1b9 ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Namespace="kube-system" Pod="coredns-66bc5c9577-v65lh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" Dec 16 02:10:48.692525 containerd[1662]: 2025-12-16 02:10:48.679 [INFO][4852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Namespace="kube-system" Pod="coredns-66bc5c9577-v65lh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" Dec 16 02:10:48.692525 containerd[1662]: 2025-12-16 02:10:48.679 [INFO][4852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Namespace="kube-system" Pod="coredns-66bc5c9577-v65lh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8d5776d1-6402-4926-ae94-65987d42425c", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4547-0-0-9-b4376e68e3", ContainerID:"335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b", Pod:"coredns-66bc5c9577-v65lh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab12fe2d1b9", MAC:"aa:0b:23:42:b9:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:48.692756 containerd[1662]: 2025-12-16 02:10:48.689 [INFO][4852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" Namespace="kube-system" Pod="coredns-66bc5c9577-v65lh" WorkloadEndpoint="ci--4547--0--0--9--b4376e68e3-k8s-coredns--66bc5c9577--v65lh-eth0" Dec 16 02:10:48.710000 audit[4958]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=4958 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:48.710000 audit[4958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22688 a0=3 a1=fffff2e0a400 a2=0 a3=ffff93898fa8 items=0 ppid=4121 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.710000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:48.721443 containerd[1662]: time="2025-12-16T02:10:48.721333952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd794ff8d-6hwpw,Uid:bf9a6289-a4bc-424f-93a6-cea4264ad5e4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ee8e3a6c435276fb75b25ce6015d1eecd8a8dc478189631d37d06a95337988cd\"" Dec 16 02:10:48.727292 containerd[1662]: time="2025-12-16T02:10:48.727251530Z" level=info msg="connecting to shim 335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b" address="unix:///run/containerd/s/48dd4fcec6bc8f968acef079609bd0955b9fffe795736d88a93094eb337004ea" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:48.755060 systemd[1]: Started cri-containerd-335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b.scope - libcontainer container 335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b. 
Dec 16 02:10:48.759372 containerd[1662]: time="2025-12-16T02:10:48.759316306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:48.762841 containerd[1662]: time="2025-12-16T02:10:48.762772677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:48.763285 containerd[1662]: time="2025-12-16T02:10:48.762815437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:48.763362 kubelet[2916]: E1216 02:10:48.763228 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:48.763456 kubelet[2916]: E1216 02:10:48.763416 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:48.763929 kubelet[2916]: E1216 02:10:48.763575 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cd794ff8d-86jwz_calico-apiserver(9fa7981b-06dd-4022-adb6-f277629bbdff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:48.763929 kubelet[2916]: E1216 02:10:48.763615 2916 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff"
Dec 16 02:10:48.764307 containerd[1662]: time="2025-12-16T02:10:48.764280201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 02:10:48.769000 audit: BPF prog-id=246 op=LOAD
Dec 16 02:10:48.770000 audit: BPF prog-id=247 op=LOAD
Dec 16 02:10:48.770000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333353731356661396363313037396638373261663231626466343732
Dec 16 02:10:48.770000 audit: BPF prog-id=247 op=UNLOAD
Dec 16 02:10:48.770000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333353731356661396363313037396638373261663231626466343732
Dec 16 02:10:48.770000 audit: BPF prog-id=248 op=LOAD
Dec 16 02:10:48.770000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333353731356661396363313037396638373261663231626466343732
Dec 16 02:10:48.770000 audit: BPF prog-id=249 op=LOAD
Dec 16 02:10:48.770000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333353731356661396363313037396638373261663231626466343732
Dec 16 02:10:48.770000 audit: BPF prog-id=249 op=UNLOAD
Dec 16 02:10:48.770000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333353731356661396363313037396638373261663231626466343732
Dec 16 02:10:48.770000 audit: BPF prog-id=248 op=UNLOAD
Dec 16 02:10:48.770000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333353731356661396363313037396638373261663231626466343732
Dec 16 02:10:48.770000 audit: BPF prog-id=250 op=LOAD
Dec 16 02:10:48.770000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333353731356661396363313037396638373261663231626466343732
Dec 16 02:10:48.794713 containerd[1662]: time="2025-12-16T02:10:48.794673333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v65lh,Uid:8d5776d1-6402-4926-ae94-65987d42425c,Namespace:kube-system,Attempt:0,} returns sandbox id \"335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b\""
Dec 16 02:10:48.799643 containerd[1662]: time="2025-12-16T02:10:48.799606467Z" level=info msg="CreateContainer within sandbox \"335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 02:10:48.808946 containerd[1662]: time="2025-12-16T02:10:48.808322653Z" level=info msg="Container a99f13cbc50ae48aeec2504ecb945de6bd1babfcf58b2da7b8e866dbd7e88f75: CDI devices from CRI Config.CDIDevices: []"
Dec 16 02:10:48.814425 containerd[1662]: time="2025-12-16T02:10:48.814384672Z" level=info msg="CreateContainer within sandbox \"335715fa9cc1079f872af21bdf4729f67800b322d5c7d5c4643e1e7bfe19dd4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a99f13cbc50ae48aeec2504ecb945de6bd1babfcf58b2da7b8e866dbd7e88f75\""
Dec 16 02:10:48.815312 containerd[1662]: time="2025-12-16T02:10:48.815266594Z" level=info msg="StartContainer for \"a99f13cbc50ae48aeec2504ecb945de6bd1babfcf58b2da7b8e866dbd7e88f75\""
Dec 16 02:10:48.816404 containerd[1662]: time="2025-12-16T02:10:48.816375358Z" level=info msg="connecting to shim a99f13cbc50ae48aeec2504ecb945de6bd1babfcf58b2da7b8e866dbd7e88f75" address="unix:///run/containerd/s/48dd4fcec6bc8f968acef079609bd0955b9fffe795736d88a93094eb337004ea" protocol=ttrpc version=3
Dec 16 02:10:48.835101 systemd[1]: Started cri-containerd-a99f13cbc50ae48aeec2504ecb945de6bd1babfcf58b2da7b8e866dbd7e88f75.scope - libcontainer container a99f13cbc50ae48aeec2504ecb945de6bd1babfcf58b2da7b8e866dbd7e88f75.
Dec 16 02:10:48.845000 audit: BPF prog-id=251 op=LOAD
Dec 16 02:10:48.845000 audit: BPF prog-id=252 op=LOAD
Dec 16 02:10:48.845000 audit[5009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4972 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396631336362633530616534386165656332353034656362393435
Dec 16 02:10:48.845000 audit: BPF prog-id=252 op=UNLOAD
Dec 16 02:10:48.845000 audit[5009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396631336362633530616534386165656332353034656362393435
Dec 16 02:10:48.845000 audit: BPF prog-id=253 op=LOAD
Dec 16 02:10:48.845000 audit[5009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4972 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396631336362633530616534386165656332353034656362393435
Dec 16 02:10:48.845000 audit: BPF prog-id=254 op=LOAD
Dec 16 02:10:48.845000 audit[5009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4972 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396631336362633530616534386165656332353034656362393435
Dec 16 02:10:48.845000 audit: BPF prog-id=254 op=UNLOAD
Dec 16 02:10:48.845000 audit[5009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396631336362633530616534386165656332353034656362393435
Dec 16 02:10:48.845000 audit: BPF prog-id=253 op=UNLOAD
Dec 16 02:10:48.845000 audit[5009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396631336362633530616534386165656332353034656362393435
Dec 16 02:10:48.846000 audit: BPF prog-id=255 op=LOAD
Dec 16 02:10:48.846000 audit[5009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4972 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:48.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139396631336362633530616534386165656332353034656362393435
Dec 16 02:10:48.863661 containerd[1662]: time="2025-12-16T02:10:48.863626019Z" level=info msg="StartContainer for \"a99f13cbc50ae48aeec2504ecb945de6bd1babfcf58b2da7b8e866dbd7e88f75\" returns successfully"
Dec 16 02:10:48.990989 systemd-networkd[1582]: calid92cbb5c1b1: Gained IPv6LL
Dec 16 02:10:49.100151 containerd[1662]: time="2025-12-16T02:10:49.100044929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:10:49.102980 containerd[1662]: time="2025-12-16T02:10:49.101787934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 02:10:49.102980 containerd[1662]: time="2025-12-16T02:10:49.101874935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:10:49.103147 kubelet[2916]: E1216 02:10:49.103094 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 02:10:49.103147 kubelet[2916]: E1216 02:10:49.103146 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 02:10:49.103523 kubelet[2916]: E1216 02:10:49.103295 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:10:49.103523 kubelet[2916]: E1216 02:10:49.103332 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1"
Dec 16 02:10:49.103975 containerd[1662]: time="2025-12-16T02:10:49.103913061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 02:10:49.311095 systemd-networkd[1582]: calibcf43d17d95: Gained IPv6LL
Dec 16 02:10:49.441240 containerd[1662]: time="2025-12-16T02:10:49.441176393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:10:49.442879 containerd[1662]: time="2025-12-16T02:10:49.442762958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 02:10:49.442879 containerd[1662]: time="2025-12-16T02:10:49.442802198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:10:49.443008 kubelet[2916]: E1216 02:10:49.442971 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 02:10:49.443050 kubelet[2916]: E1216 02:10:49.443016 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 02:10:49.443099 kubelet[2916]: E1216 02:10:49.443080 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cd794ff8d-6hwpw_calico-apiserver(bf9a6289-a4bc-424f-93a6-cea4264ad5e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:10:49.443176 kubelet[2916]: E1216 02:10:49.443116 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4"
Dec 16 02:10:49.566992 systemd-networkd[1582]: caliddb9856c558: Gained IPv6LL
Dec 16 02:10:49.585132 kubelet[2916]: E1216 02:10:49.585088 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4"
Dec 16 02:10:49.585743 kubelet[2916]: E1216 02:10:49.585714 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff"
Dec 16 02:10:49.586508 kubelet[2916]: E1216 02:10:49.586476 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1"
Dec 16 02:10:49.587177 kubelet[2916]: E1216 02:10:49.587149 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62"
Dec 16 02:10:49.598133 kubelet[2916]: I1216 02:10:49.597435 2916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v65lh" podStartSLOduration=47.597421822 podStartE2EDuration="47.597421822s" podCreationTimestamp="2025-12-16 02:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:10:49.596896101 +0000 UTC m=+51.264931203" watchObservedRunningTime="2025-12-16 02:10:49.597421822 +0000 UTC m=+51.265456964"
Dec 16 02:10:49.636000 audit[5043]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 02:10:49.636000 audit[5043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe4f709c0 a2=0 a3=1 items=0 ppid=3054 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:49.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 02:10:49.646000 audit[5043]: NETFILTER_CFG table=nat:139 family=2 entries=44 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 02:10:49.646000 audit[5043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe4f709c0 a2=0 a3=1 items=0 ppid=3054 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:49.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 02:10:49.951045 systemd-networkd[1582]: caliab12fe2d1b9: Gained IPv6LL
Dec 16 02:10:50.335109 systemd-networkd[1582]: cali8e4e5b6e47b: Gained IPv6LL
Dec 16 02:10:50.587921 kubelet[2916]: E1216 02:10:50.587728 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4"
Dec 16 02:10:50.695000 audit[5045]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 02:10:50.695000 audit[5045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffca20ced0 a2=0 a3=1 items=0 ppid=3054 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:50.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 02:10:50.709000 audit[5045]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 02:10:50.709000 audit[5045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffca20ced0 a2=0 a3=1 items=0 ppid=3054 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:10:50.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 02:10:56.430118 containerd[1662]: time="2025-12-16T02:10:56.429970772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 02:10:56.843322 containerd[1662]: time="2025-12-16T02:10:56.843089532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:10:56.844545 containerd[1662]: time="2025-12-16T02:10:56.844507456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 02:10:56.844595 containerd[1662]: time="2025-12-16T02:10:56.844549416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:10:56.844807 kubelet[2916]: E1216 02:10:56.844752 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 02:10:56.844807 kubelet[2916]: E1216 02:10:56.844795 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 02:10:56.845138 kubelet[2916]: E1216 02:10:56.844882 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-78cff6db84-5gv4l_calico-system(1d880e3b-9ee9-4ee7-a578-c81cb365da0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:10:56.845997 containerd[1662]: time="2025-12-16T02:10:56.845787260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 02:10:57.164571 containerd[1662]: time="2025-12-16T02:10:57.164500337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:10:57.166276 containerd[1662]: time="2025-12-16T02:10:57.166238022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 02:10:57.166386 containerd[1662]: time="2025-12-16T02:10:57.166311742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:10:57.166526 kubelet[2916]: E1216 02:10:57.166473 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 02:10:57.166526 kubelet[2916]: E1216 02:10:57.166525 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 02:10:57.166609 kubelet[2916]: E1216 02:10:57.166593 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-78cff6db84-5gv4l_calico-system(1d880e3b-9ee9-4ee7-a578-c81cb365da0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:10:57.166663 kubelet[2916]: E1216 02:10:57.166630 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d"
Dec 16 02:11:01.422488 containerd[1662]: time="2025-12-16T02:11:01.422446678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 02:11:01.761805 containerd[1662]: time="2025-12-16T02:11:01.761571296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:11:01.763377 containerd[1662]: time="2025-12-16T02:11:01.763270741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 02:11:01.763377 containerd[1662]: time="2025-12-16T02:11:01.763350101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:11:01.763588 kubelet[2916]: E1216 02:11:01.763549 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 02:11:01.763962 kubelet[2916]: E1216 02:11:01.763597 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 02:11:01.763962 kubelet[2916]: E1216 02:11:01.763670 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-687df49b8c-8hkjd_calico-system(8f7f3b6e-a739-4bf5-a67c-595cf0d67494): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:11:01.763962 kubelet[2916]: E1216 02:11:01.763705 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494"
Dec 16 02:11:02.428768 containerd[1662]: time="2025-12-16T02:11:02.427954896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 02:11:02.762951 containerd[1662]: time="2025-12-16T02:11:02.762784621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:11:02.764341 containerd[1662]: time="2025-12-16T02:11:02.764298466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 02:11:02.764454 containerd[1662]: time="2025-12-16T02:11:02.764395826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:11:02.764542 kubelet[2916]: E1216 02:11:02.764502 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 02:11:02.765834 kubelet[2916]: E1216 02:11:02.764552 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 02:11:02.765834 kubelet[2916]: E1216 02:11:02.764730 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:11:02.766006 containerd[1662]: time="2025-12-16T02:11:02.765322829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 02:11:03.097292 containerd[1662]: time="2025-12-16T02:11:03.097058105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:11:03.098547 containerd[1662]: time="2025-12-16T02:11:03.098510709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 02:11:03.098667 containerd[1662]: time="2025-12-16T02:11:03.098590989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:11:03.098750 kubelet[2916]: E1216 02:11:03.098709 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 02:11:03.098982 kubelet[2916]: E1216 02:11:03.098756 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 02:11:03.098982 kubelet[2916]: E1216 02:11:03.098905 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-b8vhz_calico-system(621fd2d4-81d9-4021-8cd8-39b0addbde62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:11:03.098982 kubelet[2916]: E1216 02:11:03.098942 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62"
Dec 16 02:11:03.099181 containerd[1662]: time="2025-12-16T02:11:03.099148831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 02:11:03.438198 containerd[1662]: time="2025-12-16T02:11:03.438089649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:11:03.439582 containerd[1662]: time="2025-12-16T02:11:03.439526533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:11:03.439691 containerd[1662]: time="2025-12-16T02:11:03.439565653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:03.439800 kubelet[2916]: E1216 02:11:03.439750 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:11:03.439800 kubelet[2916]: E1216 02:11:03.439794 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:11:03.439977 kubelet[2916]: E1216 02:11:03.439939 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:03.440035 kubelet[2916]: E1216 02:11:03.440009 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:11:03.440421 containerd[1662]: time="2025-12-16T02:11:03.440124575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:11:03.802215 containerd[1662]: time="2025-12-16T02:11:03.801993821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:03.803714 containerd[1662]: time="2025-12-16T02:11:03.803671786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:11:03.803790 containerd[1662]: time="2025-12-16T02:11:03.803715986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:03.803952 kubelet[2916]: E1216 02:11:03.803896 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:03.803952 kubelet[2916]: E1216 02:11:03.803944 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:03.804239 kubelet[2916]: E1216 02:11:03.804016 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cd794ff8d-6hwpw_calico-apiserver(bf9a6289-a4bc-424f-93a6-cea4264ad5e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:03.804239 kubelet[2916]: E1216 02:11:03.804051 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:11:04.425935 containerd[1662]: time="2025-12-16T02:11:04.425846534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:11:04.762335 containerd[1662]: time="2025-12-16T02:11:04.762118863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:04.763596 containerd[1662]: time="2025-12-16T02:11:04.763550067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:11:04.763640 containerd[1662]: time="2025-12-16T02:11:04.763583987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:04.763784 kubelet[2916]: E1216 02:11:04.763745 2916 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:04.763836 kubelet[2916]: E1216 02:11:04.763788 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:04.763901 kubelet[2916]: E1216 02:11:04.763881 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cd794ff8d-86jwz_calico-apiserver(9fa7981b-06dd-4022-adb6-f277629bbdff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:04.763930 kubelet[2916]: E1216 02:11:04.763915 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:11:10.423904 kubelet[2916]: E1216 02:11:10.423837 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:11:13.422567 kubelet[2916]: E1216 02:11:13.422517 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:11:15.425803 kubelet[2916]: E1216 02:11:15.425719 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:11:16.422299 kubelet[2916]: E1216 02:11:16.422228 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:11:16.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.26.207:22-143.110.190.151:33464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:16.875495 systemd[1]: Started sshd@9-10.0.26.207:22-143.110.190.151:33464.service - OpenSSH per-connection server daemon (143.110.190.151:33464). Dec 16 02:11:16.876541 kernel: kauditd_printk_skb: 155 callbacks suppressed Dec 16 02:11:16.876579 kernel: audit: type=1130 audit(1765851076.874:741): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.26.207:22-143.110.190.151:33464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:16.903562 sshd[5098]: banner exchange: Connection from 143.110.190.151 port 33464: invalid format Dec 16 02:11:16.904139 systemd[1]: sshd@9-10.0.26.207:22-143.110.190.151:33464.service: Deactivated successfully. Dec 16 02:11:16.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.26.207:22-143.110.190.151:33464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:11:16.907891 kernel: audit: type=1131 audit(1765851076.903:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.26.207:22-143.110.190.151:33464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:17.186201 systemd[1]: Started sshd@10-10.0.26.207:22-143.110.190.151:33468.service - OpenSSH per-connection server daemon (143.110.190.151:33468). Dec 16 02:11:17.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.26.207:22-143.110.190.151:33468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:17.191901 kernel: audit: type=1130 audit(1765851077.185:743): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.26.207:22-143.110.190.151:33468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:17.225246 sshd[5103]: banner exchange: Connection from 143.110.190.151 port 33468: invalid format Dec 16 02:11:17.225910 systemd[1]: sshd@10-10.0.26.207:22-143.110.190.151:33468.service: Deactivated successfully. Dec 16 02:11:17.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.26.207:22-143.110.190.151:33468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:17.230903 kernel: audit: type=1131 audit(1765851077.225:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.26.207:22-143.110.190.151:33468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:11:18.423266 kubelet[2916]: E1216 02:11:18.422935 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:11:19.422242 kubelet[2916]: E1216 02:11:19.422162 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:11:25.423566 containerd[1662]: time="2025-12-16T02:11:25.423514843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:11:25.746138 containerd[1662]: time="2025-12-16T02:11:25.745877451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:25.747296 containerd[1662]: time="2025-12-16T02:11:25.747231815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:11:25.747398 containerd[1662]: time="2025-12-16T02:11:25.747329495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" 
Dec 16 02:11:25.747565 kubelet[2916]: E1216 02:11:25.747504 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:11:25.747565 kubelet[2916]: E1216 02:11:25.747555 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:11:25.748015 kubelet[2916]: E1216 02:11:25.747630 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-78cff6db84-5gv4l_calico-system(1d880e3b-9ee9-4ee7-a578-c81cb365da0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:25.750295 containerd[1662]: time="2025-12-16T02:11:25.750257864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:11:26.094361 containerd[1662]: time="2025-12-16T02:11:26.094151336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:26.095912 containerd[1662]: time="2025-12-16T02:11:26.095828022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:11:26.097068 containerd[1662]: time="2025-12-16T02:11:26.095900262Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:26.097209 kubelet[2916]: E1216 02:11:26.097173 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:11:26.097319 kubelet[2916]: E1216 02:11:26.097214 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:11:26.097319 kubelet[2916]: E1216 02:11:26.097285 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-78cff6db84-5gv4l_calico-system(1d880e3b-9ee9-4ee7-a578-c81cb365da0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:26.097374 kubelet[2916]: E1216 02:11:26.097320 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:11:27.422831 containerd[1662]: time="2025-12-16T02:11:27.422756365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:11:27.761015 containerd[1662]: time="2025-12-16T02:11:27.760874180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:27.762474 containerd[1662]: time="2025-12-16T02:11:27.762426344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:11:27.762561 containerd[1662]: time="2025-12-16T02:11:27.762464824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:27.762699 kubelet[2916]: E1216 02:11:27.762660 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:27.762984 kubelet[2916]: E1216 02:11:27.762707 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:27.762984 kubelet[2916]: E1216 02:11:27.762778 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cd794ff8d-86jwz_calico-apiserver(9fa7981b-06dd-4022-adb6-f277629bbdff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:27.762984 kubelet[2916]: E1216 02:11:27.762806 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:11:28.423698 containerd[1662]: time="2025-12-16T02:11:28.423546529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:11:28.766512 containerd[1662]: time="2025-12-16T02:11:28.766389998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:28.768096 containerd[1662]: time="2025-12-16T02:11:28.768046763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:11:28.768196 containerd[1662]: time="2025-12-16T02:11:28.768143963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:28.768433 kubelet[2916]: E1216 02:11:28.768380 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:11:28.769285 kubelet[2916]: E1216 02:11:28.768937 2916 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:11:28.769285 kubelet[2916]: E1216 02:11:28.769053 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-687df49b8c-8hkjd_calico-system(8f7f3b6e-a739-4bf5-a67c-595cf0d67494): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:28.769285 kubelet[2916]: E1216 02:11:28.769085 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:11:29.424107 containerd[1662]: time="2025-12-16T02:11:29.423833091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:11:29.752007 containerd[1662]: time="2025-12-16T02:11:29.751891436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:29.754226 containerd[1662]: time="2025-12-16T02:11:29.754175083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:11:29.754309 containerd[1662]: 
time="2025-12-16T02:11:29.754274443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:29.754502 kubelet[2916]: E1216 02:11:29.754466 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:11:29.754553 kubelet[2916]: E1216 02:11:29.754513 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:11:29.754696 kubelet[2916]: E1216 02:11:29.754675 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:29.755195 containerd[1662]: time="2025-12-16T02:11:29.755130726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:11:30.102241 containerd[1662]: time="2025-12-16T02:11:30.102114887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:30.103924 containerd[1662]: time="2025-12-16T02:11:30.103877213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:11:30.104190 containerd[1662]: 
time="2025-12-16T02:11:30.103923933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:30.104275 kubelet[2916]: E1216 02:11:30.104126 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:11:30.104275 kubelet[2916]: E1216 02:11:30.104193 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:11:30.105256 kubelet[2916]: E1216 02:11:30.104548 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-b8vhz_calico-system(621fd2d4-81d9-4021-8cd8-39b0addbde62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:30.105256 kubelet[2916]: E1216 02:11:30.104586 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:11:30.105317 containerd[1662]: time="2025-12-16T02:11:30.104737495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:11:30.433502 containerd[1662]: 
time="2025-12-16T02:11:30.433339882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:30.435594 containerd[1662]: time="2025-12-16T02:11:30.435485488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:11:30.435677 containerd[1662]: time="2025-12-16T02:11:30.435567488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:30.435811 kubelet[2916]: E1216 02:11:30.435704 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:11:30.435811 kubelet[2916]: E1216 02:11:30.435798 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:11:30.435918 kubelet[2916]: E1216 02:11:30.435902 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 
02:11:30.436313 kubelet[2916]: E1216 02:11:30.436278 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:11:32.424112 containerd[1662]: time="2025-12-16T02:11:32.424050657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:11:32.757505 containerd[1662]: time="2025-12-16T02:11:32.757334698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:32.758856 containerd[1662]: time="2025-12-16T02:11:32.758791142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:11:32.758945 containerd[1662]: time="2025-12-16T02:11:32.758876142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:32.759120 kubelet[2916]: E1216 02:11:32.759080 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:32.759843 
kubelet[2916]: E1216 02:11:32.759417 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:32.759843 kubelet[2916]: E1216 02:11:32.759496 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cd794ff8d-6hwpw_calico-apiserver(bf9a6289-a4bc-424f-93a6-cea4264ad5e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:32.759843 kubelet[2916]: E1216 02:11:32.759525 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:11:39.423190 kubelet[2916]: E1216 02:11:39.423147 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:11:39.423676 kubelet[2916]: E1216 02:11:39.423302 2916 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:11:39.423676 kubelet[2916]: E1216 02:11:39.423389 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:11:41.423340 kubelet[2916]: E1216 02:11:41.423251 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:11:44.425208 kubelet[2916]: E1216 02:11:44.425157 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:11:45.422884 kubelet[2916]: E1216 02:11:45.422808 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:11:50.424933 kubelet[2916]: E1216 02:11:50.424827 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:11:51.422555 kubelet[2916]: E1216 02:11:51.422506 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:11:52.423658 kubelet[2916]: E1216 02:11:52.423411 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:11:52.424142 kubelet[2916]: E1216 02:11:52.423999 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:11:56.423134 kubelet[2916]: E1216 02:11:56.423089 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:11:57.422773 kubelet[2916]: E1216 02:11:57.422660 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:12:02.423447 kubelet[2916]: E1216 02:12:02.423374 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:12:02.423447 kubelet[2916]: E1216 02:12:02.423007 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:12:05.422276 kubelet[2916]: E1216 02:12:05.422068 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 
02:12:05.423072 kubelet[2916]: E1216 02:12:05.423025 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:12:07.424147 kubelet[2916]: E1216 02:12:07.424071 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:12:09.422579 kubelet[2916]: E1216 02:12:09.422527 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:12:15.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.26.207:22-139.178.68.195:45640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:15.311722 systemd[1]: Started sshd@11-10.0.26.207:22-139.178.68.195:45640.service - OpenSSH per-connection server daemon (139.178.68.195:45640). Dec 16 02:12:15.315959 kernel: audit: type=1130 audit(1765851135.310:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.26.207:22-139.178.68.195:45640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:15.423030 containerd[1662]: time="2025-12-16T02:12:15.422988250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:12:15.787875 containerd[1662]: time="2025-12-16T02:12:15.787816865Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:12:15.789304 containerd[1662]: time="2025-12-16T02:12:15.789108989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:12:15.789304 containerd[1662]: time="2025-12-16T02:12:15.789150309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:12:15.789429 kubelet[2916]: E1216 02:12:15.789399 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:12:15.791008 kubelet[2916]: E1216 02:12:15.790977 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:12:15.791084 kubelet[2916]: E1216 02:12:15.791064 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-687df49b8c-8hkjd_calico-system(8f7f3b6e-a739-4bf5-a67c-595cf0d67494): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:12:15.791145 kubelet[2916]: E1216 02:12:15.791121 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:12:16.136262 sshd[5200]: Accepted publickey for core from 139.178.68.195 port 45640 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:16.135000 audit[5200]: USER_ACCT pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 
16 02:12:16.140000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.145137 kernel: audit: type=1101 audit(1765851136.135:746): pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.145254 kernel: audit: type=1103 audit(1765851136.140:747): pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.142266 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:16.147290 kernel: audit: type=1006 audit(1765851136.140:748): pid=5200 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 02:12:16.140000 audit[5200]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3b061f0 a2=3 a3=0 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:16.150647 kernel: audit: type=1300 audit(1765851136.140:748): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3b061f0 a2=3 a3=0 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:16.140000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:16.152104 kernel: audit: type=1327 audit(1765851136.140:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:16.154437 systemd-logind[1644]: New session 11 of user core. Dec 16 02:12:16.161039 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 02:12:16.162000 audit[5200]: USER_START pid=5200 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.163000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.169313 kernel: audit: type=1105 audit(1765851136.162:749): pid=5200 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.169376 kernel: audit: type=1103 audit(1765851136.163:750): pid=5217 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.729994 sshd[5217]: Connection closed by 139.178.68.195 port 45640 Dec 16 02:12:16.730275 sshd-session[5200]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:16.731000 audit[5200]: USER_END pid=5200 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.735252 systemd[1]: sshd@11-10.0.26.207:22-139.178.68.195:45640.service: Deactivated successfully. Dec 16 02:12:16.731000 audit[5200]: CRED_DISP pid=5200 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.737287 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 02:12:16.739295 kernel: audit: type=1106 audit(1765851136.731:751): pid=5200 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.739441 kernel: audit: type=1104 audit(1765851136.731:752): pid=5200 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:16.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.26.207:22-139.178.68.195:45640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:16.740141 systemd-logind[1644]: Session 11 logged out. Waiting for processes to exit. Dec 16 02:12:16.741159 systemd-logind[1644]: Removed session 11. 
Dec 16 02:12:17.422582 containerd[1662]: time="2025-12-16T02:12:17.422340411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:12:17.755040 containerd[1662]: time="2025-12-16T02:12:17.754738289Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:12:17.756488 containerd[1662]: time="2025-12-16T02:12:17.756437294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:12:17.756687 containerd[1662]: time="2025-12-16T02:12:17.756512694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:12:17.756719 kubelet[2916]: E1216 02:12:17.756666 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:12:17.756719 kubelet[2916]: E1216 02:12:17.756714 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:12:17.757030 kubelet[2916]: E1216 02:12:17.756780 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-78cff6db84-5gv4l_calico-system(1d880e3b-9ee9-4ee7-a578-c81cb365da0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 
16 02:12:17.758587 containerd[1662]: time="2025-12-16T02:12:17.757871658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:12:18.084322 containerd[1662]: time="2025-12-16T02:12:18.084205438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:12:18.086209 containerd[1662]: time="2025-12-16T02:12:18.086056364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:12:18.086209 containerd[1662]: time="2025-12-16T02:12:18.086152804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:12:18.086427 kubelet[2916]: E1216 02:12:18.086347 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:12:18.086480 kubelet[2916]: E1216 02:12:18.086453 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:12:18.086578 kubelet[2916]: E1216 02:12:18.086555 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-78cff6db84-5gv4l_calico-system(1d880e3b-9ee9-4ee7-a578-c81cb365da0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:12:18.086921 kubelet[2916]: E1216 02:12:18.086876 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:12:18.422994 containerd[1662]: time="2025-12-16T02:12:18.422831134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:12:18.765799 containerd[1662]: time="2025-12-16T02:12:18.765665244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:12:18.767036 containerd[1662]: time="2025-12-16T02:12:18.766993288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:12:18.767110 containerd[1662]: time="2025-12-16T02:12:18.767032128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:12:18.767236 kubelet[2916]: E1216 02:12:18.767199 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:12:18.767512 kubelet[2916]: E1216 02:12:18.767250 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:12:18.767512 kubelet[2916]: E1216 02:12:18.767318 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cd794ff8d-86jwz_calico-apiserver(9fa7981b-06dd-4022-adb6-f277629bbdff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:12:18.767512 kubelet[2916]: E1216 02:12:18.767347 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:12:19.422651 containerd[1662]: time="2025-12-16T02:12:19.422610856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:12:19.759337 containerd[1662]: time="2025-12-16T02:12:19.759113186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:12:19.760714 containerd[1662]: time="2025-12-16T02:12:19.760665710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:12:19.761134 containerd[1662]: time="2025-12-16T02:12:19.760698110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:12:19.761183 kubelet[2916]: E1216 02:12:19.760888 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:12:19.761183 kubelet[2916]: E1216 02:12:19.760927 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:12:19.761183 kubelet[2916]: E1216 02:12:19.761120 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-b8vhz_calico-system(621fd2d4-81d9-4021-8cd8-39b0addbde62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:12:19.761266 kubelet[2916]: E1216 02:12:19.761161 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:12:19.761925 containerd[1662]: 
time="2025-12-16T02:12:19.761697313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:12:20.094330 containerd[1662]: time="2025-12-16T02:12:20.094202592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:12:20.095955 containerd[1662]: time="2025-12-16T02:12:20.095829916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:12:20.095955 containerd[1662]: time="2025-12-16T02:12:20.095890237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:12:20.096334 kubelet[2916]: E1216 02:12:20.096288 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:12:20.096722 kubelet[2916]: E1216 02:12:20.096341 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:12:20.096722 kubelet[2916]: E1216 02:12:20.096414 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:12:20.098058 containerd[1662]: time="2025-12-16T02:12:20.098023243Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:12:20.437104 containerd[1662]: time="2025-12-16T02:12:20.436926460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:12:20.438136 containerd[1662]: time="2025-12-16T02:12:20.438006264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:12:20.438136 containerd[1662]: time="2025-12-16T02:12:20.438092584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:12:20.438322 kubelet[2916]: E1216 02:12:20.438274 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:12:20.438407 kubelet[2916]: E1216 02:12:20.438329 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:12:20.438546 kubelet[2916]: E1216 02:12:20.438520 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-sgqkd_calico-system(451fe39a-cf01-4312-91ac-f3d2c7b640b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:12:20.438595 kubelet[2916]: E1216 02:12:20.438564 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:12:20.438883 containerd[1662]: time="2025-12-16T02:12:20.438818466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:12:20.766599 containerd[1662]: time="2025-12-16T02:12:20.766451130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:12:20.768095 containerd[1662]: time="2025-12-16T02:12:20.768036374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:12:20.768185 containerd[1662]: time="2025-12-16T02:12:20.768127095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:12:20.768316 kubelet[2916]: E1216 02:12:20.768280 2916 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:12:20.768374 kubelet[2916]: E1216 02:12:20.768326 2916 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:12:20.768494 kubelet[2916]: E1216 02:12:20.768399 2916 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cd794ff8d-6hwpw_calico-apiserver(bf9a6289-a4bc-424f-93a6-cea4264ad5e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:12:20.768494 kubelet[2916]: E1216 02:12:20.768436 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:12:21.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.26.207:22-139.178.68.195:51230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:21.899786 systemd[1]: Started sshd@12-10.0.26.207:22-139.178.68.195:51230.service - OpenSSH per-connection server daemon (139.178.68.195:51230). 
Dec 16 02:12:21.901960 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:12:21.902042 kernel: audit: type=1130 audit(1765851141.899:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.26.207:22-139.178.68.195:51230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:22.734000 audit[5232]: USER_ACCT pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:22.737163 sshd[5232]: Accepted publickey for core from 139.178.68.195 port 51230 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:22.739623 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:22.737000 audit[5232]: CRED_ACQ pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:22.742719 kernel: audit: type=1101 audit(1765851142.734:755): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:22.742797 kernel: audit: type=1103 audit(1765851142.737:756): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:22.744603 kernel: audit: type=1006 audit(1765851142.738:757): 
pid=5232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 02:12:22.738000 audit[5232]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2338350 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:22.749561 kernel: audit: type=1300 audit(1765851142.738:757): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2338350 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:22.738000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:22.750816 kernel: audit: type=1327 audit(1765851142.738:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:22.755094 systemd-logind[1644]: New session 12 of user core. Dec 16 02:12:22.762086 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 02:12:22.764000 audit[5232]: USER_START pid=5232 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:22.766000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:22.772390 kernel: audit: type=1105 audit(1765851142.764:758): pid=5232 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:22.772460 kernel: audit: type=1103 audit(1765851142.766:759): pid=5236 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:23.269076 sshd[5236]: Connection closed by 139.178.68.195 port 51230 Dec 16 02:12:23.269439 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:23.269000 audit[5232]: USER_END pid=5232 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:23.272896 systemd[1]: 
sshd@12-10.0.26.207:22-139.178.68.195:51230.service: Deactivated successfully. Dec 16 02:12:23.269000 audit[5232]: CRED_DISP pid=5232 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:23.275007 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 02:12:23.276998 systemd-logind[1644]: Session 12 logged out. Waiting for processes to exit. Dec 16 02:12:23.277494 kernel: audit: type=1106 audit(1765851143.269:760): pid=5232 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:23.277612 kernel: audit: type=1104 audit(1765851143.269:761): pid=5232 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:23.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.26.207:22-139.178.68.195:51230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:23.278032 systemd-logind[1644]: Removed session 12. 
Dec 16 02:12:23.408395 update_engine[1646]: I20251216 02:12:23.407960 1646 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 02:12:23.408395 update_engine[1646]: I20251216 02:12:23.408022 1646 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 02:12:23.408395 update_engine[1646]: I20251216 02:12:23.408263 1646 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 02:12:23.408773 update_engine[1646]: I20251216 02:12:23.408615 1646 omaha_request_params.cc:62] Current group set to alpha Dec 16 02:12:23.411292 update_engine[1646]: I20251216 02:12:23.411239 1646 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 02:12:23.411292 update_engine[1646]: I20251216 02:12:23.411276 1646 update_attempter.cc:643] Scheduling an action processor start. Dec 16 02:12:23.411292 update_engine[1646]: I20251216 02:12:23.411298 1646 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 02:12:23.411876 update_engine[1646]: I20251216 02:12:23.411511 1646 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 02:12:23.411876 update_engine[1646]: I20251216 02:12:23.411590 1646 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 02:12:23.411876 update_engine[1646]: I20251216 02:12:23.411598 1646 omaha_request_action.cc:272] Request: Dec 16 02:12:23.411876 update_engine[1646]: Dec 16 02:12:23.411876 update_engine[1646]: Dec 16 02:12:23.411876 update_engine[1646]: Dec 16 02:12:23.411876 update_engine[1646]: Dec 16 02:12:23.411876 update_engine[1646]: Dec 16 02:12:23.411876 update_engine[1646]: Dec 16 02:12:23.411876 update_engine[1646]: Dec 16 02:12:23.411876 update_engine[1646]: Dec 16 02:12:23.411876 update_engine[1646]: I20251216 02:12:23.411604 1646 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 02:12:23.412209 locksmithd[1705]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 02:12:23.414038 update_engine[1646]: I20251216 02:12:23.413378 1646 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 02:12:23.415238 update_engine[1646]: I20251216 02:12:23.415191 1646 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 02:12:23.423432 update_engine[1646]: E20251216 02:12:23.423365 1646 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 02:12:23.423554 update_engine[1646]: I20251216 02:12:23.423475 1646 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 02:12:23.438953 systemd[1]: Started sshd@13-10.0.26.207:22-139.178.68.195:51234.service - OpenSSH per-connection server daemon (139.178.68.195:51234). Dec 16 02:12:23.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.26.207:22-139.178.68.195:51234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:12:24.263000 audit[5252]: USER_ACCT pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:24.264909 sshd[5252]: Accepted publickey for core from 139.178.68.195 port 51234 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:24.264000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:24.265000 audit[5252]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff315c990 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:24.265000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:24.266596 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:24.271553 systemd-logind[1644]: New session 13 of user core. Dec 16 02:12:24.277072 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 02:12:24.279000 audit[5252]: USER_START pid=5252 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:24.280000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:24.828036 sshd[5256]: Connection closed by 139.178.68.195 port 51234 Dec 16 02:12:24.828573 sshd-session[5252]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:24.829000 audit[5252]: USER_END pid=5252 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:24.829000 audit[5252]: CRED_DISP pid=5252 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:24.834156 systemd[1]: sshd@13-10.0.26.207:22-139.178.68.195:51234.service: Deactivated successfully. Dec 16 02:12:24.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.26.207:22-139.178.68.195:51234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:24.837114 systemd[1]: session-13.scope: Deactivated successfully. 
Dec 16 02:12:24.839039 systemd-logind[1644]: Session 13 logged out. Waiting for processes to exit. Dec 16 02:12:24.840588 systemd-logind[1644]: Removed session 13. Dec 16 02:12:24.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.26.207:22-139.178.68.195:51242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:24.998312 systemd[1]: Started sshd@14-10.0.26.207:22-139.178.68.195:51242.service - OpenSSH per-connection server daemon (139.178.68.195:51242). Dec 16 02:12:25.828000 audit[5268]: USER_ACCT pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:25.829407 sshd[5268]: Accepted publickey for core from 139.178.68.195 port 51242 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:25.831000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:25.831000 audit[5268]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda938290 a2=3 a3=0 items=0 ppid=1 pid=5268 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:25.831000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:25.833043 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:25.840279 systemd-logind[1644]: New session 14 of user core. 
Dec 16 02:12:25.845085 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 02:12:25.849000 audit[5268]: USER_START pid=5268 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:25.851000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:26.377901 sshd[5272]: Connection closed by 139.178.68.195 port 51242 Dec 16 02:12:26.377693 sshd-session[5268]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:26.378000 audit[5268]: USER_END pid=5268 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:26.378000 audit[5268]: CRED_DISP pid=5268 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:26.381944 systemd-logind[1644]: Session 14 logged out. Waiting for processes to exit. Dec 16 02:12:26.382235 systemd[1]: sshd@14-10.0.26.207:22-139.178.68.195:51242.service: Deactivated successfully. 
Dec 16 02:12:26.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.26.207:22-139.178.68.195:51242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:26.385528 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 02:12:26.387826 systemd-logind[1644]: Removed session 14. Dec 16 02:12:26.426385 kubelet[2916]: E1216 02:12:26.426327 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:12:31.422412 kubelet[2916]: E1216 02:12:31.422305 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:12:31.424174 kubelet[2916]: E1216 02:12:31.424118 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:12:31.575438 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 02:12:31.575540 kernel: audit: type=1130 audit(1765851151.573:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.26.207:22-139.178.68.195:55182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:31.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.26.207:22-139.178.68.195:55182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:31.574309 systemd[1]: Started sshd@15-10.0.26.207:22-139.178.68.195:55182.service - OpenSSH per-connection server daemon (139.178.68.195:55182). 
Dec 16 02:12:32.423439 kubelet[2916]: E1216 02:12:32.423351 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:12:32.424965 kubelet[2916]: E1216 02:12:32.424849 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:12:32.461000 audit[5289]: USER_ACCT pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:32.463063 sshd[5289]: Accepted publickey for core from 139.178.68.195 port 55182 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE 
Dec 16 02:12:32.464000 audit[5289]: CRED_ACQ pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:32.466558 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:32.468428 kernel: audit: type=1101 audit(1765851152.461:782): pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:32.468488 kernel: audit: type=1103 audit(1765851152.464:783): pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:32.470366 kernel: audit: type=1006 audit(1765851152.464:784): pid=5289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 02:12:32.464000 audit[5289]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc78e7280 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:32.473624 kernel: audit: type=1300 audit(1765851152.464:784): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc78e7280 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:32.464000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:32.475553 kernel: audit: type=1327 audit(1765851152.464:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:32.477340 systemd-logind[1644]: New session 15 of user core. Dec 16 02:12:32.483056 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 02:12:32.485000 audit[5289]: USER_START pid=5289 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:32.489000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:32.493055 kernel: audit: type=1105 audit(1765851152.485:785): pid=5289 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:32.493109 kernel: audit: type=1103 audit(1765851152.489:786): pid=5293 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:33.053009 sshd[5293]: Connection closed by 139.178.68.195 port 55182 Dec 16 02:12:33.053338 sshd-session[5289]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:33.053000 audit[5289]: USER_END pid=5289 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:33.058963 systemd[1]: sshd@15-10.0.26.207:22-139.178.68.195:55182.service: Deactivated successfully. Dec 16 02:12:33.053000 audit[5289]: CRED_DISP pid=5289 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:33.060897 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 02:12:33.063349 kernel: audit: type=1106 audit(1765851153.053:787): pid=5289 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:33.063427 kernel: audit: type=1104 audit(1765851153.053:788): pid=5289 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:33.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.26.207:22-139.178.68.195:55182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:33.064095 systemd-logind[1644]: Session 15 logged out. Waiting for processes to exit. Dec 16 02:12:33.065919 systemd-logind[1644]: Removed session 15. 
Dec 16 02:12:33.408888 update_engine[1646]: I20251216 02:12:33.407912 1646 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 02:12:33.408888 update_engine[1646]: I20251216 02:12:33.408003 1646 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 02:12:33.408888 update_engine[1646]: I20251216 02:12:33.408344 1646 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 02:12:33.413958 update_engine[1646]: E20251216 02:12:33.413919 1646 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 02:12:33.414130 update_engine[1646]: I20251216 02:12:33.414109 1646 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 16 02:12:33.423114 kubelet[2916]: E1216 02:12:33.423058 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:12:38.208725 systemd[1]: Started sshd@16-10.0.26.207:22-139.178.68.195:55192.service - OpenSSH per-connection server daemon (139.178.68.195:55192). Dec 16 02:12:38.209056 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:12:38.209098 kernel: audit: type=1130 audit(1765851158.207:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.26.207:22-139.178.68.195:55192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:12:38.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.26.207:22-139.178.68.195:55192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:39.035000 audit[5310]: USER_ACCT pid=5310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.037075 sshd[5310]: Accepted publickey for core from 139.178.68.195 port 55192 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:39.039000 audit[5310]: CRED_ACQ pid=5310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.041478 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:39.043572 kernel: audit: type=1101 audit(1765851159.035:791): pid=5310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.043619 kernel: audit: type=1103 audit(1765851159.039:792): pid=5310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.045591 kernel: audit: type=1006 audit(1765851159.039:793): pid=5310 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 02:12:39.039000 audit[5310]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd30020a0 a2=3 a3=0 items=0 ppid=1 pid=5310 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:39.049068 kernel: audit: type=1300 audit(1765851159.039:793): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd30020a0 a2=3 a3=0 items=0 ppid=1 pid=5310 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:39.049217 kernel: audit: type=1327 audit(1765851159.039:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:39.039000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:39.052889 systemd-logind[1644]: New session 16 of user core. Dec 16 02:12:39.062083 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 02:12:39.065000 audit[5310]: USER_START pid=5310 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.071120 kernel: audit: type=1105 audit(1765851159.065:794): pid=5310 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.071198 kernel: audit: type=1103 audit(1765851159.069:795): pid=5314 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.069000 audit[5314]: CRED_ACQ pid=5314 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.570768 sshd[5314]: Connection closed by 139.178.68.195 port 55192 Dec 16 02:12:39.571763 sshd-session[5310]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:39.572000 audit[5310]: USER_END pid=5310 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.576439 systemd[1]: 
sshd@16-10.0.26.207:22-139.178.68.195:55192.service: Deactivated successfully. Dec 16 02:12:39.572000 audit[5310]: CRED_DISP pid=5310 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.578541 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 02:12:39.580892 systemd-logind[1644]: Session 16 logged out. Waiting for processes to exit. Dec 16 02:12:39.581331 kernel: audit: type=1106 audit(1765851159.572:796): pid=5310 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.581389 kernel: audit: type=1104 audit(1765851159.572:797): pid=5310 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:39.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.26.207:22-139.178.68.195:55192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:39.582734 systemd-logind[1644]: Removed session 16. 
Dec 16 02:12:40.422666 kubelet[2916]: E1216 02:12:40.422607 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:12:43.410259 update_engine[1646]: I20251216 02:12:43.410008 1646 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 02:12:43.410259 update_engine[1646]: I20251216 02:12:43.410091 1646 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 02:12:43.410684 update_engine[1646]: I20251216 02:12:43.410472 1646 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 02:12:43.416309 update_engine[1646]: E20251216 02:12:43.416262 1646 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 02:12:43.416373 update_engine[1646]: I20251216 02:12:43.416339 1646 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 02:12:44.423420 kubelet[2916]: E1216 02:12:44.423361 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:12:44.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.26.207:22-139.178.68.195:50864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:44.743652 systemd[1]: Started sshd@17-10.0.26.207:22-139.178.68.195:50864.service - OpenSSH per-connection server daemon (139.178.68.195:50864). 
Dec 16 02:12:44.744427 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:12:44.744451 kernel: audit: type=1130 audit(1765851164.742:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.26.207:22-139.178.68.195:50864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:45.424533 kubelet[2916]: E1216 02:12:45.424476 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:12:45.561000 audit[5352]: USER_ACCT pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:45.563067 sshd[5352]: Accepted publickey for core from 139.178.68.195 port 50864 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:45.564000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:45.566555 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:45.568802 kernel: audit: type=1101 audit(1765851165.561:800): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:45.568876 kernel: audit: type=1103 audit(1765851165.564:801): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:45.568898 kernel: audit: type=1006 audit(1765851165.564:802): pid=5352 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 02:12:45.564000 audit[5352]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc65b9eb0 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:45.574057 kernel: audit: type=1300 audit(1765851165.564:802): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc65b9eb0 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:45.564000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:45.575555 kernel: audit: type=1327 audit(1765851165.564:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:45.576609 systemd-logind[1644]: New session 17 of user core. Dec 16 02:12:45.587065 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 02:12:45.588000 audit[5352]: USER_START pid=5352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:45.592000 audit[5356]: CRED_ACQ pid=5356 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:45.596563 kernel: audit: type=1105 audit(1765851165.588:803): pid=5352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:45.596649 kernel: audit: type=1103 audit(1765851165.592:804): pid=5356 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:46.097999 sshd[5356]: Connection closed by 139.178.68.195 port 50864 Dec 16 02:12:46.098561 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:46.100000 audit[5352]: USER_END pid=5352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:46.105920 systemd-logind[1644]: Session 17 logged 
out. Waiting for processes to exit. Dec 16 02:12:46.106368 systemd[1]: sshd@17-10.0.26.207:22-139.178.68.195:50864.service: Deactivated successfully. Dec 16 02:12:46.107886 kernel: audit: type=1106 audit(1765851166.100:805): pid=5352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:46.107954 kernel: audit: type=1104 audit(1765851166.100:806): pid=5352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:46.100000 audit[5352]: CRED_DISP pid=5352 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:46.109643 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 02:12:46.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.26.207:22-139.178.68.195:50864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:46.116351 systemd-logind[1644]: Removed session 17. Dec 16 02:12:46.266503 systemd[1]: Started sshd@18-10.0.26.207:22-139.178.68.195:50866.service - OpenSSH per-connection server daemon (139.178.68.195:50866). Dec 16 02:12:46.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.26.207:22-139.178.68.195:50866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:12:46.426130 kubelet[2916]: E1216 02:12:46.426078 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:12:46.427090 kubelet[2916]: E1216 02:12:46.427050 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:12:47.091000 audit[5369]: USER_ACCT pid=5369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:47.092166 sshd[5369]: Accepted publickey for core from 139.178.68.195 port 50866 ssh2: RSA 
SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:47.092000 audit[5369]: CRED_ACQ pid=5369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:47.092000 audit[5369]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc084a6f0 a2=3 a3=0 items=0 ppid=1 pid=5369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:47.092000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:47.093800 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:47.098634 systemd-logind[1644]: New session 18 of user core. Dec 16 02:12:47.115118 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 02:12:47.121000 audit[5369]: USER_START pid=5369 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:47.123000 audit[5373]: CRED_ACQ pid=5373 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:47.425171 kubelet[2916]: E1216 02:12:47.425104 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:12:47.685694 sshd[5373]: Connection closed by 139.178.68.195 port 50866 Dec 16 02:12:47.686018 sshd-session[5369]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:47.687000 audit[5369]: USER_END pid=5369 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:47.688000 audit[5369]: CRED_DISP pid=5369 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:47.692600 systemd[1]: sshd@18-10.0.26.207:22-139.178.68.195:50866.service: Deactivated successfully. Dec 16 02:12:47.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.26.207:22-139.178.68.195:50866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:47.696562 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 02:12:47.701203 systemd-logind[1644]: Session 18 logged out. Waiting for processes to exit. Dec 16 02:12:47.704930 systemd-logind[1644]: Removed session 18. Dec 16 02:12:47.853562 systemd[1]: Started sshd@19-10.0.26.207:22-139.178.68.195:50870.service - OpenSSH per-connection server daemon (139.178.68.195:50870). Dec 16 02:12:47.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.26.207:22-139.178.68.195:50870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:12:48.671000 audit[5385]: USER_ACCT pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:48.672477 sshd[5385]: Accepted publickey for core from 139.178.68.195 port 50870 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:48.672000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:48.672000 audit[5385]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcdff25d0 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:48.672000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:48.674265 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:48.678528 systemd-logind[1644]: New session 19 of user core. Dec 16 02:12:48.687052 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 02:12:48.688000 audit[5385]: USER_START pid=5385 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:48.690000 audit[5389]: CRED_ACQ pid=5389 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:49.601000 audit[5401]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:12:49.601000 audit[5401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff2527b30 a2=0 a3=1 items=0 ppid=3054 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:49.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:12:49.607000 audit[5401]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:12:49.607000 audit[5401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff2527b30 a2=0 a3=1 items=0 ppid=3054 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:49.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
02:12:49.760116 sshd[5389]: Connection closed by 139.178.68.195 port 50870 Dec 16 02:12:49.761952 sshd-session[5385]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:49.766955 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 16 02:12:49.767017 kernel: audit: type=1106 audit(1765851169.762:825): pid=5385 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:49.762000 audit[5385]: USER_END pid=5385 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:49.765845 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 02:12:49.767175 systemd[1]: sshd@19-10.0.26.207:22-139.178.68.195:50870.service: Deactivated successfully. 
Dec 16 02:12:49.762000 audit[5385]: CRED_DISP pid=5385 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:49.772888 kernel: audit: type=1104 audit(1765851169.762:826): pid=5385 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:49.772961 kernel: audit: type=1131 audit(1765851169.766:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.26.207:22-139.178.68.195:50870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:49.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.26.207:22-139.178.68.195:50870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:49.772983 systemd-logind[1644]: Session 19 logged out. Waiting for processes to exit. Dec 16 02:12:49.774122 systemd-logind[1644]: Removed session 19. Dec 16 02:12:49.926847 systemd[1]: Started sshd@20-10.0.26.207:22-139.178.68.195:50876.service - OpenSSH per-connection server daemon (139.178.68.195:50876). Dec 16 02:12:49.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.26.207:22-139.178.68.195:50876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:12:49.930895 kernel: audit: type=1130 audit(1765851169.926:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.26.207:22-139.178.68.195:50876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:50.622000 audit[5410]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:12:50.622000 audit[5410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffe951d30 a2=0 a3=1 items=0 ppid=3054 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:50.630518 kernel: audit: type=1325 audit(1765851170.622:829): table=filter:144 family=2 entries=26 op=nft_register_rule pid=5410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:12:50.630589 kernel: audit: type=1300 audit(1765851170.622:829): arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffe951d30 a2=0 a3=1 items=0 ppid=3054 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:50.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:12:50.632426 kernel: audit: type=1327 audit(1765851170.622:829): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:12:50.632000 audit[5410]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:12:50.632000 audit[5410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 
a1=fffffe951d30 a2=0 a3=1 items=0 ppid=3054 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:50.638920 kernel: audit: type=1325 audit(1765851170.632:830): table=nat:145 family=2 entries=20 op=nft_register_rule pid=5410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:12:50.639933 kernel: audit: type=1300 audit(1765851170.632:830): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffe951d30 a2=0 a3=1 items=0 ppid=3054 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:50.639969 kernel: audit: type=1327 audit(1765851170.632:830): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:12:50.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:12:50.750000 audit[5406]: USER_ACCT pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:50.751091 sshd[5406]: Accepted publickey for core from 139.178.68.195 port 50876 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:50.751000 audit[5406]: CRED_ACQ pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:50.751000 audit[5406]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=ffffc0a4d870 a2=3 a3=0 items=0 ppid=1 pid=5406 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:50.751000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:50.753002 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:50.757030 systemd-logind[1644]: New session 20 of user core. Dec 16 02:12:50.767046 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 02:12:50.769000 audit[5406]: USER_START pid=5406 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:50.770000 audit[5412]: CRED_ACQ pid=5412 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:51.429711 sshd[5412]: Connection closed by 139.178.68.195 port 50876 Dec 16 02:12:51.429025 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:51.429000 audit[5406]: USER_END pid=5406 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:51.429000 audit[5406]: CRED_DISP pid=5406 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:51.433179 systemd[1]: sshd@20-10.0.26.207:22-139.178.68.195:50876.service: Deactivated successfully. Dec 16 02:12:51.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.26.207:22-139.178.68.195:50876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:51.436368 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 02:12:51.438642 systemd-logind[1644]: Session 20 logged out. Waiting for processes to exit. Dec 16 02:12:51.439520 systemd-logind[1644]: Removed session 20. Dec 16 02:12:51.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.26.207:22-139.178.68.195:52040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:51.634161 systemd[1]: Started sshd@21-10.0.26.207:22-139.178.68.195:52040.service - OpenSSH per-connection server daemon (139.178.68.195:52040). 
Dec 16 02:12:52.536000 audit[5424]: USER_ACCT pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:52.538150 sshd[5424]: Accepted publickey for core from 139.178.68.195 port 52040 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:52.538000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:52.538000 audit[5424]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4208ad0 a2=3 a3=0 items=0 ppid=1 pid=5424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:52.538000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:52.539715 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:52.544712 systemd-logind[1644]: New session 21 of user core. Dec 16 02:12:52.556230 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 02:12:52.557000 audit[5424]: USER_START pid=5424 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:52.559000 audit[5428]: CRED_ACQ pid=5428 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:53.132349 sshd[5428]: Connection closed by 139.178.68.195 port 52040 Dec 16 02:12:53.132763 sshd-session[5424]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:53.133000 audit[5424]: USER_END pid=5424 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:53.133000 audit[5424]: CRED_DISP pid=5424 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:53.136790 systemd[1]: sshd@21-10.0.26.207:22-139.178.68.195:52040.service: Deactivated successfully. Dec 16 02:12:53.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.26.207:22-139.178.68.195:52040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:53.138747 systemd[1]: session-21.scope: Deactivated successfully. 
Dec 16 02:12:53.139545 systemd-logind[1644]: Session 21 logged out. Waiting for processes to exit. Dec 16 02:12:53.140525 systemd-logind[1644]: Removed session 21. Dec 16 02:12:53.408741 update_engine[1646]: I20251216 02:12:53.408195 1646 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 02:12:53.408741 update_engine[1646]: I20251216 02:12:53.408344 1646 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 02:12:53.409098 update_engine[1646]: I20251216 02:12:53.408962 1646 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 02:12:53.414576 update_engine[1646]: E20251216 02:12:53.414516 1646 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 02:12:53.414676 update_engine[1646]: I20251216 02:12:53.414605 1646 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 02:12:53.414676 update_engine[1646]: I20251216 02:12:53.414616 1646 omaha_request_action.cc:617] Omaha request response: Dec 16 02:12:53.414724 update_engine[1646]: E20251216 02:12:53.414695 1646 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 16 02:12:53.414724 update_engine[1646]: I20251216 02:12:53.414713 1646 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 16 02:12:53.414724 update_engine[1646]: I20251216 02:12:53.414717 1646 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 02:12:53.414724 update_engine[1646]: I20251216 02:12:53.414721 1646 update_attempter.cc:306] Processing Done. Dec 16 02:12:53.414801 update_engine[1646]: E20251216 02:12:53.414736 1646 update_attempter.cc:619] Update failed. 
Dec 16 02:12:53.414801 update_engine[1646]: I20251216 02:12:53.414741 1646 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 16 02:12:53.414801 update_engine[1646]: I20251216 02:12:53.414744 1646 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 16 02:12:53.414801 update_engine[1646]: I20251216 02:12:53.414750 1646 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 16 02:12:53.415184 update_engine[1646]: I20251216 02:12:53.415129 1646 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 02:12:53.415184 update_engine[1646]: I20251216 02:12:53.415169 1646 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 02:12:53.415184 update_engine[1646]: I20251216 02:12:53.415176 1646 omaha_request_action.cc:272] Request: Dec 16 02:12:53.415184 update_engine[1646]: Dec 16 02:12:53.415184 update_engine[1646]: Dec 16 02:12:53.415184 update_engine[1646]: Dec 16 02:12:53.415184 update_engine[1646]: Dec 16 02:12:53.415184 update_engine[1646]: Dec 16 02:12:53.415184 update_engine[1646]: Dec 16 02:12:53.415184 update_engine[1646]: I20251216 02:12:53.415182 1646 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 02:12:53.415444 update_engine[1646]: I20251216 02:12:53.415205 1646 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 02:12:53.415474 locksmithd[1705]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 16 02:12:53.415696 update_engine[1646]: I20251216 02:12:53.415490 1646 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 02:12:53.423161 update_engine[1646]: E20251216 02:12:53.423106 1646 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 02:12:53.423239 update_engine[1646]: I20251216 02:12:53.423187 1646 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 02:12:53.423239 update_engine[1646]: I20251216 02:12:53.423197 1646 omaha_request_action.cc:617] Omaha request response: Dec 16 02:12:53.423239 update_engine[1646]: I20251216 02:12:53.423204 1646 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 02:12:53.423239 update_engine[1646]: I20251216 02:12:53.423209 1646 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 02:12:53.423239 update_engine[1646]: I20251216 02:12:53.423214 1646 update_attempter.cc:306] Processing Done. Dec 16 02:12:53.423239 update_engine[1646]: I20251216 02:12:53.423219 1646 update_attempter.cc:310] Error event sent. 
Dec 16 02:12:53.423239 update_engine[1646]: I20251216 02:12:53.423228 1646 update_check_scheduler.cc:74] Next update check in 45m1s Dec 16 02:12:53.423727 locksmithd[1705]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 16 02:12:53.744000 audit[5442]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5442 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:12:53.744000 audit[5442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe351ee00 a2=0 a3=1 items=0 ppid=3054 pid=5442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:53.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:12:53.751000 audit[5442]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5442 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:12:53.751000 audit[5442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe351ee00 a2=0 a3=1 items=0 ppid=3054 pid=5442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:53.751000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:12:54.423332 kubelet[2916]: E1216 02:12:54.422977 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:12:57.423589 kubelet[2916]: E1216 02:12:57.423537 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:12:58.288776 systemd[1]: Started sshd@22-10.0.26.207:22-139.178.68.195:52050.service - OpenSSH per-connection server daemon (139.178.68.195:52050). Dec 16 02:12:58.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.26.207:22-139.178.68.195:52050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:58.292147 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 02:12:58.292270 kernel: audit: type=1130 audit(1765851178.288:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.26.207:22-139.178.68.195:52050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:12:59.124000 audit[5444]: USER_ACCT pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.125789 sshd[5444]: Accepted publickey for core from 139.178.68.195 port 52050 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:12:59.128000 audit[5444]: CRED_ACQ pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.130316 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:12:59.132302 kernel: audit: type=1101 audit(1765851179.124:851): pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.132386 kernel: audit: type=1103 audit(1765851179.128:852): pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.134262 kernel: audit: type=1006 audit(1765851179.128:853): pid=5444 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 02:12:59.128000 audit[5444]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc1b297b0 a2=3 a3=0 items=0 ppid=1 pid=5444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:59.138084 kernel: audit: type=1300 audit(1765851179.128:853): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc1b297b0 a2=3 a3=0 items=0 ppid=1 pid=5444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:12:59.138176 kernel: audit: type=1327 audit(1765851179.128:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:59.128000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:12:59.141419 systemd-logind[1644]: New session 22 of user core. Dec 16 02:12:59.147650 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 02:12:59.150000 audit[5444]: USER_START pid=5444 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.152000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.158647 kernel: audit: type=1105 audit(1765851179.150:854): pid=5444 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.158734 kernel: audit: type=1103 
audit(1765851179.152:855): pid=5450 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.424443 kubelet[2916]: E1216 02:12:59.424390 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:12:59.428125 kubelet[2916]: E1216 02:12:59.428081 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:12:59.686072 sshd[5450]: Connection closed by 139.178.68.195 port 52050 Dec 16 02:12:59.686429 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Dec 16 02:12:59.687000 audit[5444]: 
USER_END pid=5444 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.691027 systemd[1]: sshd@22-10.0.26.207:22-139.178.68.195:52050.service: Deactivated successfully. Dec 16 02:12:59.687000 audit[5444]: CRED_DISP pid=5444 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.693978 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 02:12:59.694821 kernel: audit: type=1106 audit(1765851179.687:856): pid=5444 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.695460 kernel: audit: type=1104 audit(1765851179.687:857): pid=5444 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:12:59.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.26.207:22-139.178.68.195:52050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:12:59.695050 systemd-logind[1644]: Session 22 logged out. Waiting for processes to exit. Dec 16 02:12:59.696213 systemd-logind[1644]: Removed session 22. 
Dec 16 02:13:00.424334 kubelet[2916]: E1216 02:13:00.424154 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:13:01.422968 kubelet[2916]: E1216 02:13:01.422918 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:13:04.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.26.207:22-139.178.68.195:34912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:13:04.859823 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:13:04.859911 kernel: audit: type=1130 audit(1765851184.857:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.26.207:22-139.178.68.195:34912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:13:04.858684 systemd[1]: Started sshd@23-10.0.26.207:22-139.178.68.195:34912.service - OpenSSH per-connection server daemon (139.178.68.195:34912). 
Dec 16 02:13:05.422024 kubelet[2916]: E1216 02:13:05.421967 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:13:05.704165 sshd[5465]: Accepted publickey for core from 139.178.68.195 port 34912 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:13:05.702000 audit[5465]: USER_ACCT pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:05.707000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:05.710345 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:13:05.711222 kernel: audit: type=1101 audit(1765851185.702:860): pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:05.711283 kernel: audit: type=1103 audit(1765851185.707:861): pid=5465 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:05.711306 kernel: audit: type=1006 audit(1765851185.707:862): pid=5465 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 02:13:05.707000 audit[5465]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd649bd0 a2=3 a3=0 items=0 ppid=1 pid=5465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:13:05.717632 kernel: audit: type=1300 audit(1765851185.707:862): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd649bd0 a2=3 a3=0 items=0 ppid=1 pid=5465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:13:05.717704 kernel: audit: type=1327 audit(1765851185.707:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:13:05.707000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:13:05.719147 systemd-logind[1644]: New session 23 of user core. Dec 16 02:13:05.732108 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 02:13:05.733000 audit[5465]: USER_START pid=5465 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:05.736000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:05.741386 kernel: audit: type=1105 audit(1765851185.733:863): pid=5465 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:05.741458 kernel: audit: type=1103 audit(1765851185.736:864): pid=5469 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:06.245971 sshd[5469]: Connection closed by 139.178.68.195 port 34912 Dec 16 02:13:06.246669 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Dec 16 02:13:06.247000 audit[5465]: USER_END pid=5465 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:06.250832 systemd[1]: 
sshd@23-10.0.26.207:22-139.178.68.195:34912.service: Deactivated successfully. Dec 16 02:13:06.247000 audit[5465]: CRED_DISP pid=5465 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:06.252902 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 02:13:06.254684 kernel: audit: type=1106 audit(1765851186.247:865): pid=5465 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:06.254739 kernel: audit: type=1104 audit(1765851186.247:866): pid=5465 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:06.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.26.207:22-139.178.68.195:34912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:13:06.255537 systemd-logind[1644]: Session 23 logged out. Waiting for processes to exit. Dec 16 02:13:06.256295 systemd-logind[1644]: Removed session 23. 
Dec 16 02:13:08.426348 kubelet[2916]: E1216 02:13:08.426299 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:13:10.424683 kubelet[2916]: E1216 02:13:10.424633 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:13:11.415171 systemd[1]: Started sshd@24-10.0.26.207:22-139.178.68.195:59346.service - OpenSSH per-connection server daemon (139.178.68.195:59346). Dec 16 02:13:11.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.26.207:22-139.178.68.195:59346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:13:11.418522 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:13:11.418620 kernel: audit: type=1130 audit(1765851191.414:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.26.207:22-139.178.68.195:59346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:13:11.423414 kubelet[2916]: E1216 02:13:11.423362 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:13:12.256000 audit[5482]: USER_ACCT pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.258483 sshd[5482]: Accepted publickey for core from 139.178.68.195 port 59346 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:13:12.260000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.262604 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:13:12.264802 kernel: audit: type=1101 audit(1765851192.256:869): pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.264882 kernel: audit: type=1103 audit(1765851192.260:870): pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.264910 kernel: audit: type=1006 audit(1765851192.261:871): pid=5482 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 02:13:12.261000 audit[5482]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4c25670 a2=3 a3=0 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:13:12.268766 systemd-logind[1644]: New session 24 of user core. 
Dec 16 02:13:12.269868 kernel: audit: type=1300 audit(1765851192.261:871): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4c25670 a2=3 a3=0 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:13:12.261000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:13:12.271045 kernel: audit: type=1327 audit(1765851192.261:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:13:12.279056 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 02:13:12.281000 audit[5482]: USER_START pid=5482 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.283000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.288708 kernel: audit: type=1105 audit(1765851192.281:872): pid=5482 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.288744 kernel: audit: type=1103 audit(1765851192.283:873): pid=5486 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.423196 kubelet[2916]: E1216 02:13:12.423105 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:13:12.814702 sshd[5486]: Connection closed by 139.178.68.195 port 59346 Dec 16 02:13:12.814597 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Dec 16 02:13:12.815000 audit[5482]: USER_END pid=5482 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.818728 systemd-logind[1644]: Session 24 logged out. Waiting for processes to exit. Dec 16 02:13:12.819011 systemd[1]: sshd@24-10.0.26.207:22-139.178.68.195:59346.service: Deactivated successfully. 
Dec 16 02:13:12.820991 kernel: audit: type=1106 audit(1765851192.815:874): pid=5482 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.821064 kernel: audit: type=1104 audit(1765851192.815:875): pid=5482 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.815000 audit[5482]: CRED_DISP pid=5482 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:12.820826 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 02:13:12.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.26.207:22-139.178.68.195:59346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:13:12.824947 systemd-logind[1644]: Removed session 24. 
Dec 16 02:13:16.422713 kubelet[2916]: E1216 02:13:16.422117 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:13:17.999423 systemd[1]: Started sshd@25-10.0.26.207:22-139.178.68.195:59352.service - OpenSSH per-connection server daemon (139.178.68.195:59352). Dec 16 02:13:17.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.26.207:22-139.178.68.195:59352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:13:18.000919 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:13:18.000982 kernel: audit: type=1130 audit(1765851197.999:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.26.207:22-139.178.68.195:59352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:13:18.857000 audit[5526]: USER_ACCT pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:18.862473 sshd[5526]: Accepted publickey for core from 139.178.68.195 port 59352 ssh2: RSA SHA256:yQ8wjpbqRsK2jlak/DV2gBCMRlEhjWeH7wjqCPmVImE Dec 16 02:13:18.861000 audit[5526]: CRED_ACQ pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:18.865897 kernel: audit: type=1101 audit(1765851198.857:878): pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:18.865965 kernel: audit: type=1103 audit(1765851198.861:879): pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:18.863771 sshd-session[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:13:18.868090 kernel: audit: type=1006 audit(1765851198.861:880): pid=5526 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 02:13:18.861000 audit[5526]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd6c1500 a2=3 a3=0 items=0 ppid=1 pid=5526 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:13:18.861000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:13:18.873761 kernel: audit: type=1300 audit(1765851198.861:880): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd6c1500 a2=3 a3=0 items=0 ppid=1 pid=5526 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:13:18.873834 kernel: audit: type=1327 audit(1765851198.861:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:13:18.877653 systemd-logind[1644]: New session 25 of user core. Dec 16 02:13:18.887063 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 02:13:18.888000 audit[5526]: USER_START pid=5526 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:18.890000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:18.895924 kernel: audit: type=1105 audit(1765851198.888:881): pid=5526 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:18.895998 kernel: audit: type=1103 
audit(1765851198.890:882): pid=5530 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:19.403023 sshd[5530]: Connection closed by 139.178.68.195 port 59352 Dec 16 02:13:19.403602 sshd-session[5526]: pam_unix(sshd:session): session closed for user core Dec 16 02:13:19.404000 audit[5526]: USER_END pid=5526 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:19.410250 systemd[1]: sshd@25-10.0.26.207:22-139.178.68.195:59352.service: Deactivated successfully. Dec 16 02:13:19.405000 audit[5526]: CRED_DISP pid=5526 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:19.413512 systemd[1]: session-25.scope: Deactivated successfully. 
Dec 16 02:13:19.416325 kernel: audit: type=1106 audit(1765851199.404:883): pid=5526 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:19.416928 kernel: audit: type=1104 audit(1765851199.405:884): pid=5526 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 02:13:19.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.26.207:22-139.178.68.195:59352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:13:19.420720 systemd-logind[1644]: Session 25 logged out. Waiting for processes to exit. Dec 16 02:13:19.421977 systemd-logind[1644]: Removed session 25. 
Dec 16 02:13:19.422386 kubelet[2916]: E1216 02:13:19.422315 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:13:22.425611 kubelet[2916]: E1216 02:13:22.425547 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cff6db84-5gv4l" podUID="1d880e3b-9ee9-4ee7-a578-c81cb365da0d" Dec 16 02:13:23.423123 kubelet[2916]: E1216 02:13:23.423071 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-b8vhz" podUID="621fd2d4-81d9-4021-8cd8-39b0addbde62" Dec 16 02:13:23.423904 kubelet[2916]: E1216 02:13:23.423811 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgqkd" podUID="451fe39a-cf01-4312-91ac-f3d2c7b640b1" Dec 16 02:13:24.425023 kubelet[2916]: E1216 02:13:24.424968 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-6hwpw" podUID="bf9a6289-a4bc-424f-93a6-cea4264ad5e4" Dec 16 02:13:29.422824 kubelet[2916]: E1216 02:13:29.422772 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd794ff8d-86jwz" podUID="9fa7981b-06dd-4022-adb6-f277629bbdff" Dec 16 02:13:32.424373 kubelet[2916]: E1216 02:13:32.424250 2916 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-687df49b8c-8hkjd" podUID="8f7f3b6e-a739-4bf5-a67c-595cf0d67494" Dec 16 02:13:34.948896 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec