Aug 13 00:51:47.033538 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 00:51:47.033570 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:51:47.033585 kernel: BIOS-provided physical RAM map: Aug 13 00:51:47.033596 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 13 00:51:47.034639 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Aug 13 00:51:47.034656 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Aug 13 00:51:47.034673 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Aug 13 00:51:47.034685 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Aug 13 00:51:47.034696 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Aug 13 00:51:47.034706 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Aug 13 00:51:47.034717 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Aug 13 00:51:47.034728 kernel: printk: bootconsole [earlyser0] enabled Aug 13 00:51:47.034739 kernel: NX (Execute Disable) protection: active Aug 13 00:51:47.034751 kernel: efi: EFI v2.70 by Microsoft Aug 13 00:51:47.034767 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 Aug 13 00:51:47.034779 kernel: random: crng init done Aug 13 00:51:47.034791 kernel: SMBIOS 3.1.0 present. 
Aug 13 00:51:47.034802 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Aug 13 00:51:47.034814 kernel: Hypervisor detected: Microsoft Hyper-V Aug 13 00:51:47.034827 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Aug 13 00:51:47.034838 kernel: Hyper-V Host Build:20348-10.0-1-0.1827 Aug 13 00:51:47.034849 kernel: Hyper-V: Nested features: 0x1e0101 Aug 13 00:51:47.034863 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Aug 13 00:51:47.034875 kernel: Hyper-V: Using hypercall for remote TLB flush Aug 13 00:51:47.034887 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 13 00:51:47.034899 kernel: tsc: Marking TSC unstable due to running on Hyper-V Aug 13 00:51:47.034912 kernel: tsc: Detected 2593.906 MHz processor Aug 13 00:51:47.034924 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:51:47.034937 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:51:47.034949 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Aug 13 00:51:47.034961 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:51:47.034973 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Aug 13 00:51:47.034987 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Aug 13 00:51:47.034999 kernel: Using GB pages for direct mapping Aug 13 00:51:47.035011 kernel: Secure boot disabled Aug 13 00:51:47.035022 kernel: ACPI: Early table checksum verification disabled Aug 13 00:51:47.035034 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Aug 13 00:51:47.035046 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035058 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035071 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Aug 13 00:51:47.035090 kernel: ACPI: FACS 0x000000003FFFE000 000040 Aug 13 00:51:47.035103 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035116 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035129 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035142 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035155 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035171 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035184 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:51:47.035197 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Aug 13 00:51:47.035210 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Aug 13 00:51:47.035222 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Aug 13 00:51:47.035235 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Aug 13 00:51:47.035248 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Aug 13 00:51:47.035261 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Aug 13 00:51:47.035280 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] 
Aug 13 00:51:47.035294 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Aug 13 00:51:47.035307 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Aug 13 00:51:47.035320 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Aug 13 00:51:47.035332 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 00:51:47.035345 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 00:51:47.035358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Aug 13 00:51:47.035371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Aug 13 00:51:47.035384 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Aug 13 00:51:47.035399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Aug 13 00:51:47.035413 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Aug 13 00:51:47.035426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Aug 13 00:51:47.035438 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Aug 13 00:51:47.035451 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Aug 13 00:51:47.035464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Aug 13 00:51:47.035477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Aug 13 00:51:47.035490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Aug 13 00:51:47.035503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Aug 13 00:51:47.035519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Aug 13 00:51:47.035533 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Aug 13 00:51:47.035546 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Aug 13 00:51:47.035559 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Aug 13 00:51:47.035572 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Aug 13 00:51:47.035585 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Aug 13 00:51:47.035598 kernel: Zone ranges: Aug 13 00:51:47.035620 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:51:47.035632 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 00:51:47.035648 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Aug 13 00:51:47.035661 kernel: Movable zone start for each node Aug 13 00:51:47.035674 kernel: Early memory node ranges Aug 13 00:51:47.035686 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 13 00:51:47.035699 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Aug 13 00:51:47.035712 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Aug 13 00:51:47.035725 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Aug 13 00:51:47.035737 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Aug 13 00:51:47.035750 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:51:47.035765 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 13 00:51:47.035779 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Aug 13 00:51:47.035792 kernel: ACPI: PM-Timer IO Port: 0x408 Aug 13 00:51:47.035805 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Aug 13 00:51:47.035818 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Aug 13 
00:51:47.035831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:51:47.035843 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:51:47.035856 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Aug 13 00:51:47.035869 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 00:51:47.035885 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Aug 13 00:51:47.035897 kernel: Booting paravirtualized kernel on Hyper-V Aug 13 00:51:47.035910 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:51:47.035924 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:51:47.035937 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Aug 13 00:51:47.035950 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Aug 13 00:51:47.035962 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:51:47.035975 kernel: Hyper-V: PV spinlocks enabled Aug 13 00:51:47.035987 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 00:51:47.036003 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Aug 13 00:51:47.036016 kernel: Policy zone: Normal Aug 13 00:51:47.036031 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:51:47.036045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:51:47.036057 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 13 00:51:47.036070 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:51:47.036083 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:51:47.036096 kernel: Memory: 8079144K/8387460K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 308056K reserved, 0K cma-reserved) Aug 13 00:51:47.036112 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:51:47.036125 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 00:51:47.036148 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 00:51:47.036164 kernel: rcu: Hierarchical RCU implementation. Aug 13 00:51:47.036178 kernel: rcu: RCU event tracing is enabled. Aug 13 00:51:47.036192 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:51:47.036206 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:51:47.036219 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:51:47.036233 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 13 00:51:47.036246 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:51:47.036260 kernel: Using NULL legacy PIC Aug 13 00:51:47.036277 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Aug 13 00:51:47.036291 kernel: Console: colour dummy device 80x25 Aug 13 00:51:47.036304 kernel: printk: console [tty1] enabled Aug 13 00:51:47.036318 kernel: printk: console [ttyS0] enabled Aug 13 00:51:47.036332 kernel: printk: bootconsole [earlyser0] disabled Aug 13 00:51:47.036347 kernel: ACPI: Core revision 20210730 Aug 13 00:51:47.036361 kernel: Failed to register legacy timer interrupt Aug 13 00:51:47.036375 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:51:47.036388 kernel: Hyper-V: Using IPI hypercalls Aug 13 00:51:47.036402 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Aug 13 00:51:47.036416 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 13 00:51:47.036430 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Aug 13 00:51:47.036443 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:51:47.036457 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:51:47.036471 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:51:47.036487 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Aug 13 00:51:47.036501 kernel: RETBleed: Vulnerable Aug 13 00:51:47.036514 kernel: Speculative Store Bypass: Vulnerable Aug 13 00:51:47.036527 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:51:47.036541 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:51:47.036554 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:51:47.036567 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:51:47.036580 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:51:47.036594 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:51:47.039923 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Aug 13 00:51:47.039947 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Aug 13 00:51:47.039956 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Aug 13 00:51:47.039964 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:51:47.039974 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Aug 13 00:51:47.039981 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Aug 13 00:51:47.039992 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Aug 13 00:51:47.040001 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Aug 13 00:51:47.040010 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:51:47.040021 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:51:47.040030 kernel: LSM: Security Framework initializing Aug 13 00:51:47.040039 kernel: SELinux: Initializing. 
Aug 13 00:51:47.040047 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:51:47.040059 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:51:47.040068 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Aug 13 00:51:47.040079 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Aug 13 00:51:47.040089 kernel: signal: max sigframe size: 3632 Aug 13 00:51:47.040099 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:51:47.040109 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 00:51:47.040120 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:51:47.040130 kernel: x86: Booting SMP configuration: Aug 13 00:51:47.040142 kernel: .... node #0, CPUs: #1 Aug 13 00:51:47.040155 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Aug 13 00:51:47.040170 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 13 00:51:47.040181 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:51:47.040191 kernel: smpboot: Max logical packages: 1 Aug 13 00:51:47.040199 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Aug 13 00:51:47.040209 kernel: devtmpfs: initialized Aug 13 00:51:47.040218 kernel: x86/mm: Memory block size: 128MB Aug 13 00:51:47.040229 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Aug 13 00:51:47.040236 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:51:47.040246 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:51:47.040254 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:51:47.040261 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:51:47.040269 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:51:47.040276 kernel: audit: type=2000 audit(1755046305.024:1): state=initialized audit_enabled=0 res=1 Aug 13 00:51:47.040283 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:51:47.040290 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:51:47.040297 kernel: cpuidle: using governor menu Aug 13 00:51:47.040305 kernel: ACPI: bus type PCI registered Aug 13 00:51:47.040314 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:51:47.040321 kernel: dca service started, version 1.12.1 Aug 13 00:51:47.040329 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 00:51:47.040336 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:51:47.040343 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:51:47.040350 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:51:47.040358 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:51:47.040365 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:51:47.040372 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 00:51:47.040381 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 00:51:47.040389 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 00:51:47.040396 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:51:47.040403 kernel: ACPI: Interpreter enabled Aug 13 00:51:47.040410 kernel: ACPI: PM: (supports S0 S5) Aug 13 00:51:47.040418 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:51:47.040425 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:51:47.040437 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 13 00:51:47.040444 kernel: iommu: Default domain type: Translated Aug 13 00:51:47.040457 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:51:47.040464 kernel: vgaarb: loaded Aug 13 00:51:47.040472 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 00:51:47.040482 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 00:51:47.040492 kernel: PTP clock support registered Aug 13 00:51:47.040500 kernel: Registered efivars operations Aug 13 00:51:47.040508 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:51:47.040518 kernel: PCI: System does not support PCI Aug 13 00:51:47.040528 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Aug 13 00:51:47.040538 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:51:47.040548 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:51:47.040558 kernel: pnp: PnP ACPI init Aug 13 00:51:47.040566 kernel: pnp: PnP ACPI: found 3 devices Aug 13 00:51:47.040574 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:51:47.040584 kernel: NET: Registered PF_INET protocol family Aug 13 00:51:47.040595 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 00:51:47.040602 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 13 00:51:47.040621 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:51:47.040633 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:51:47.040642 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Aug 13 00:51:47.040651 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 13 00:51:47.040662 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 00:51:47.040669 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 00:51:47.040680 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:51:47.040688 kernel: NET: Registered PF_XDP protocol family Aug 13 00:51:47.040698 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:51:47.040705 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 00:51:47.040718 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Aug 13 00:51:47.040729 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms 
ovfl timer Aug 13 00:51:47.040736 kernel: Initialise system trusted keyrings Aug 13 00:51:47.040746 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 13 00:51:47.040754 kernel: Key type asymmetric registered Aug 13 00:51:47.040764 kernel: Asymmetric key parser 'x509' registered Aug 13 00:51:47.040772 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 00:51:47.040782 kernel: io scheduler mq-deadline registered Aug 13 00:51:47.040791 kernel: io scheduler kyber registered Aug 13 00:51:47.040802 kernel: io scheduler bfq registered Aug 13 00:51:47.040811 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:51:47.040820 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:51:47.040830 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:51:47.040838 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 00:51:47.040849 kernel: i8042: PNP: No PS/2 controller found. Aug 13 00:51:47.040994 kernel: rtc_cmos 00:02: registered as rtc0 Aug 13 00:51:47.041085 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T00:51:46 UTC (1755046306) Aug 13 00:51:47.041172 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 13 00:51:47.041183 kernel: intel_pstate: CPU model not supported Aug 13 00:51:47.041192 kernel: efifb: probing for efifb Aug 13 00:51:47.041203 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 00:51:47.041211 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 00:51:47.041221 kernel: efifb: scrolling: redraw Aug 13 00:51:47.041230 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 00:51:47.041240 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:51:47.041250 kernel: fb0: EFI VGA frame buffer device Aug 13 00:51:47.041260 kernel: pstore: Registered efi as persistent store backend Aug 13 00:51:47.041271 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:51:47.041279 kernel: Segment Routing with IPv6 Aug 13 00:51:47.041286 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:51:47.041296 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:51:47.041304 kernel: Key type dns_resolver registered Aug 13 00:51:47.041311 kernel: IPI shorthand broadcast: enabled Aug 13 00:51:47.041322 kernel: sched_clock: Marking stable (723323400, 20841300)->(909187600, -165022900) Aug 13 00:51:47.041329 kernel: registered taskstats version 1 Aug 13 00:51:47.041342 kernel: Loading compiled-in X.509 certificates Aug 13 00:51:47.041351 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 00:51:47.041360 kernel: Key type .fscrypt registered Aug 13 00:51:47.041367 kernel: Key type fscrypt-provisioning registered Aug 13 00:51:47.041378 kernel: pstore: Using crash dump compression: deflate Aug 13 00:51:47.041389 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 00:51:47.041397 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:51:47.041407 kernel: ima: No architecture policies found Aug 13 00:51:47.041418 kernel: clk: Disabling unused clocks Aug 13 00:51:47.041428 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 00:51:47.041435 kernel: Write protecting the kernel read-only data: 28672k Aug 13 00:51:47.041444 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 00:51:47.041453 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 00:51:47.041466 kernel: Run /init as init process Aug 13 00:51:47.041474 kernel: with arguments: Aug 13 00:51:47.041483 kernel: /init Aug 13 00:51:47.041492 kernel: with environment: Aug 13 00:51:47.041504 kernel: HOME=/ Aug 13 00:51:47.041512 kernel: TERM=linux Aug 13 00:51:47.041522 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:51:47.041535 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:51:47.041545 systemd[1]: Detected virtualization microsoft. Aug 13 00:51:47.041556 systemd[1]: Detected architecture x86-64. Aug 13 00:51:47.041567 systemd[1]: Running in initrd. Aug 13 00:51:47.041575 systemd[1]: No hostname configured, using default hostname. Aug 13 00:51:47.041587 systemd[1]: Hostname set to . Aug 13 00:51:47.041597 systemd[1]: Initializing machine ID from random generator. Aug 13 00:51:47.041614 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:51:47.041624 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:51:47.041635 systemd[1]: Reached target cryptsetup.target. Aug 13 00:51:47.041643 systemd[1]: Reached target paths.target. Aug 13 00:51:47.041654 systemd[1]: Reached target slices.target. Aug 13 00:51:47.041663 systemd[1]: Reached target swap.target. Aug 13 00:51:47.041675 systemd[1]: Reached target timers.target. Aug 13 00:51:47.041685 systemd[1]: Listening on iscsid.socket. Aug 13 00:51:47.041695 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:51:47.041706 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:51:47.041714 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:51:47.041724 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:51:47.041736 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:51:47.041744 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:51:47.041756 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:51:47.041767 systemd[1]: Reached target sockets.target. Aug 13 00:51:47.041775 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:51:47.041785 systemd[1]: Finished network-cleanup.service. Aug 13 00:51:47.041794 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:51:47.041805 systemd[1]: Starting systemd-journald.service... Aug 13 00:51:47.041813 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:51:47.041824 systemd[1]: Starting systemd-resolved.service... Aug 13 00:51:47.041835 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:51:47.041845 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:51:47.041856 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:51:47.041866 systemd[1]: Finished systemd-vconsole-setup.service. 
Aug 13 00:51:47.041875 kernel: audit: type=1130 audit(1755046307.039:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.041889 systemd-journald[183]: Journal started Aug 13 00:51:47.041939 systemd-journald[183]: Runtime Journal (/run/log/journal/07348b145acf476db229d08d9ea4145c) is 8.0M, max 159.0M, 151.0M free. Aug 13 00:51:47.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.007993 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 00:51:47.061233 systemd[1]: Started systemd-journald.service. Aug 13 00:51:47.064983 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:51:47.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.077802 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:51:47.080693 kernel: audit: type=1130 audit(1755046307.063:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.091623 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:51:47.097671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:51:47.112501 kernel: audit: type=1130 audit(1755046307.099:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.106085 systemd-resolved[185]: Positive Trust Anchors: Aug 13 00:51:47.146961 kernel: Bridge firewalling registered Aug 13 00:51:47.146990 kernel: audit: type=1130 audit(1755046307.129:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.106097 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:51:47.159919 kernel: audit: type=1130 audit(1755046307.141:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:47.106130 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:51:47.108883 systemd-resolved[185]: Defaulting to hostname 'linux'. Aug 13 00:51:47.179908 kernel: SCSI subsystem initialized Aug 13 00:51:47.110008 systemd[1]: Started systemd-resolved.service. Aug 13 00:51:47.113191 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 00:51:47.139799 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:51:47.142058 systemd[1]: Reached target nss-lookup.target. Aug 13 00:51:47.192476 dracut-cmdline[200]: dracut-dracut-053 Aug 13 00:51:47.144947 systemd[1]: Starting dracut-cmdline.service... Aug 13 00:51:47.197419 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:51:47.228941 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:51:47.229006 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:51:47.234484 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:51:47.238564 systemd-modules-load[184]: Inserted module 'dm_multipath' Aug 13 00:51:47.241590 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:51:47.257623 kernel: audit: type=1130 audit(1755046307.245:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.257217 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:51:47.270387 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:51:47.285756 kernel: audit: type=1130 audit(1755046307.272:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.297631 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:51:47.316629 kernel: iscsi: registered transport (tcp) Aug 13 00:51:47.343319 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:51:47.343391 kernel: QLogic iSCSI HBA Driver Aug 13 00:51:47.372695 systemd[1]: Finished dracut-cmdline.service. 
Aug 13 00:51:47.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.375876 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:51:47.388714 kernel: audit: type=1130 audit(1755046307.374:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.432630 kernel: raid6: avx512x4 gen() 18695 MB/s Aug 13 00:51:47.452620 kernel: raid6: avx512x4 xor() 8140 MB/s Aug 13 00:51:47.471617 kernel: raid6: avx512x2 gen() 18628 MB/s Aug 13 00:51:47.491619 kernel: raid6: avx512x2 xor() 29739 MB/s Aug 13 00:51:47.510616 kernel: raid6: avx512x1 gen() 18650 MB/s Aug 13 00:51:47.530614 kernel: raid6: avx512x1 xor() 26951 MB/s Aug 13 00:51:47.550617 kernel: raid6: avx2x4 gen() 18629 MB/s Aug 13 00:51:47.569620 kernel: raid6: avx2x4 xor() 8014 MB/s Aug 13 00:51:47.589616 kernel: raid6: avx2x2 gen() 18602 MB/s Aug 13 00:51:47.609617 kernel: raid6: avx2x2 xor() 22141 MB/s Aug 13 00:51:47.629615 kernel: raid6: avx2x1 gen() 14200 MB/s Aug 13 00:51:47.649614 kernel: raid6: avx2x1 xor() 19475 MB/s Aug 13 00:51:47.669625 kernel: raid6: sse2x4 gen() 11740 MB/s Aug 13 00:51:47.689614 kernel: raid6: sse2x4 xor() 7392 MB/s Aug 13 00:51:47.709616 kernel: raid6: sse2x2 gen() 12862 MB/s Aug 13 00:51:47.729617 kernel: raid6: sse2x2 xor() 7472 MB/s Aug 13 00:51:47.748617 kernel: raid6: sse2x1 gen() 11608 MB/s Aug 13 00:51:47.770928 kernel: raid6: sse2x1 xor() 5922 MB/s Aug 13 00:51:47.770946 kernel: raid6: using algorithm avx512x4 gen() 18695 MB/s Aug 13 00:51:47.770958 kernel: raid6: .... xor() 8140 MB/s, rmw enabled Aug 13 00:51:47.776961 kernel: raid6: using avx512x2 recovery algorithm Aug 13 00:51:47.792636 kernel: xor: automatically using best checksumming function avx Aug 13 00:51:47.888633 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:51:47.896753 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:51:47.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.900815 systemd[1]: Starting systemd-udevd.service... Aug 13 00:51:47.899000 audit: BPF prog-id=7 op=LOAD Aug 13 00:51:47.899000 audit: BPF prog-id=8 op=LOAD Aug 13 00:51:47.913199 kernel: audit: type=1130 audit(1755046307.898:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.925204 systemd-udevd[384]: Using default interface naming scheme 'v252'. Aug 13 00:51:47.932274 systemd[1]: Started systemd-udevd.service. Aug 13 00:51:47.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.937409 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:51:47.955160 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Aug 13 00:51:47.985048 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:51:47.988095 systemd[1]: Starting systemd-udev-trigger.service... 
Aug 13 00:51:47.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:48.024815 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:51:48.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:48.070630 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:51:48.082636 kernel: hv_vmbus: Vmbus version:5.2 Aug 13 00:51:48.107632 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:51:48.107682 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 00:51:48.121628 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 00:51:48.146641 kernel: AES CTR mode by8 optimization enabled Aug 13 00:51:48.152006 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 00:51:48.156845 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:51:48.161625 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 00:51:48.167625 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 00:51:48.167665 kernel: scsi host0: storvsc_host_t Aug 13 00:51:48.170625 kernel: scsi host1: storvsc_host_t Aug 13 00:51:48.177633 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 00:51:48.177694 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 00:51:48.188623 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 00:51:48.188679 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 00:51:48.220333 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 00:51:48.229982 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:51:48.230005 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 00:51:48.248374 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 00:51:48.248561 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 00:51:48.248748 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:51:48.248908 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 00:51:48.249064 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 00:51:48.249222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:48.249241 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:51:48.299641 kernel: hv_netvsc 6045bdf1-b772-6045-bdf1-b7726045bdf1 eth0: VF slot 1 added Aug 13 00:51:48.307624 kernel: hv_vmbus: registering driver hv_pci Aug 13 00:51:48.312628 kernel: hv_pci 65273b1a-7218-42f8-9696-df0adb5b3845: PCI VMBus probing: Using version 0x10004 Aug 13 00:51:48.376733 kernel: hv_pci 65273b1a-7218-42f8-9696-df0adb5b3845: PCI host bridge to bus 7218:00 Aug 13 00:51:48.376915 kernel: pci_bus 7218:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 13 00:51:48.377094 kernel: pci_bus 7218:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 00:51:48.377262 kernel: pci 7218:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 13 00:51:48.377435 kernel: pci 7218:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 00:51:48.377621 kernel: pci 7218:00:02.0: enabling Extended Tags Aug 13 00:51:48.377787 kernel: pci 
7218:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7218:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 13 00:51:48.377942 kernel: pci_bus 7218:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 00:51:48.378086 kernel: pci 7218:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 00:51:48.469876 kernel: mlx5_core 7218:00:02.0: enabling device (0000 -> 0002) Aug 13 00:51:48.730936 kernel: mlx5_core 7218:00:02.0: firmware version: 14.30.5000 Aug 13 00:51:48.731071 kernel: mlx5_core 7218:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Aug 13 00:51:48.731178 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (437) Aug 13 00:51:48.731190 kernel: mlx5_core 7218:00:02.0: Supported tc offload range - chains: 1, prios: 1 Aug 13 00:51:48.731290 kernel: mlx5_core 7218:00:02.0: mlx5e_tc_post_act_init:40:(pid 213): firmware level support is missing Aug 13 00:51:48.731388 kernel: hv_netvsc 6045bdf1-b772-6045-bdf1-b7726045bdf1 eth0: VF registering: eth1 Aug 13 00:51:48.731482 kernel: mlx5_core 7218:00:02.0 eth1: joined to eth0 Aug 13 00:51:48.668539 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:51:48.694441 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:51:48.743625 kernel: mlx5_core 7218:00:02.0 enP29208s1: renamed from eth1 Aug 13 00:51:48.937826 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:51:48.982912 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:51:48.986052 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:51:48.992831 systemd[1]: Starting disk-uuid.service... Aug 13 00:51:49.007624 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:49.017626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:49.024627 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:50.028526 disk-uuid[559]: The operation has completed successfully. Aug 13 00:51:50.033772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:50.111771 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:51:50.111887 systemd[1]: Finished disk-uuid.service. Aug 13 00:51:50.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.118707 systemd[1]: Starting verity-setup.service... Aug 13 00:51:50.156627 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:51:50.559299 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:51:50.564489 systemd[1]: Finished verity-setup.service. Aug 13 00:51:50.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.568451 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:51:50.645636 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:51:50.645885 systemd[1]: Mounted sysusr-usr.mount. 
Aug 13 00:51:50.649274 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:51:50.653057 systemd[1]: Starting ignition-setup.service... Aug 13 00:51:50.655584 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:51:50.680435 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:51:50.680479 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:51:50.680493 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:51:50.726563 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:51:50.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.731000 audit: BPF prog-id=9 op=LOAD Aug 13 00:51:50.732285 systemd[1]: Starting systemd-networkd.service... Aug 13 00:51:50.755594 systemd-networkd[823]: lo: Link UP Aug 13 00:51:50.755612 systemd-networkd[823]: lo: Gained carrier Aug 13 00:51:50.759017 systemd-networkd[823]: Enumeration completed Aug 13 00:51:50.759112 systemd[1]: Started systemd-networkd.service. Aug 13 00:51:50.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.764348 systemd[1]: Reached target network.target. Aug 13 00:51:50.766495 systemd-networkd[823]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:51:50.767118 systemd[1]: Starting iscsiuio.service... Aug 13 00:51:50.779783 systemd[1]: Started iscsiuio.service. Aug 13 00:51:50.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.782948 systemd[1]: Starting iscsid.service... Aug 13 00:51:50.789460 iscsid[828]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:51:50.789460 iscsid[828]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Aug 13 00:51:50.789460 iscsid[828]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:51:50.789460 iscsid[828]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:51:50.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.820047 iscsid[828]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:51:50.820047 iscsid[828]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:51:50.833716 kernel: mlx5_core 7218:00:02.0 enP29208s1: Link up Aug 13 00:51:50.790967 systemd[1]: Started iscsid.service. 
Aug 13 00:51:50.795203 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:51:50.808514 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:51:50.811976 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:51:50.816326 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:51:50.818220 systemd[1]: Reached target remote-fs.target. Aug 13 00:51:50.820786 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:51:50.847628 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:51:50.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.861642 kernel: hv_netvsc 6045bdf1-b772-6045-bdf1-b7726045bdf1 eth0: Data path switched to VF: enP29208s1 Aug 13 00:51:50.866368 systemd-networkd[823]: enP29208s1: Link UP Aug 13 00:51:50.868559 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:51:50.866503 systemd-networkd[823]: eth0: Link UP Aug 13 00:51:50.866718 systemd-networkd[823]: eth0: Gained carrier Aug 13 00:51:50.870115 systemd-networkd[823]: enP29208s1: Gained carrier Aug 13 00:51:50.882664 systemd-networkd[823]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 00:51:50.977694 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:51:51.083407 systemd[1]: Finished ignition-setup.service. Aug 13 00:51:51.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:51.087002 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:51:52.759843 systemd-networkd[823]: eth0: Gained IPv6LL Aug 13 00:51:55.021464 ignition[850]: Ignition 2.14.0 Aug 13 00:51:55.021483 ignition[850]: Stage: fetch-offline Aug 13 00:51:55.021569 ignition[850]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:55.021652 ignition[850]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:55.158209 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:55.158387 ignition[850]: parsed url from cmdline: "" Aug 13 00:51:55.158391 ignition[850]: no config URL provided Aug 13 00:51:55.158397 ignition[850]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:51:55.158405 ignition[850]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:51:55.158411 ignition[850]: failed to fetch config: resource requires networking Aug 13 00:51:55.159667 ignition[850]: Ignition finished successfully Aug 13 00:51:55.173430 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:51:55.182342 kernel: kauditd_printk_skb: 16 callbacks suppressed Aug 13 00:51:55.182381 kernel: audit: type=1130 audit(1755046315.177:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.178353 systemd[1]: Starting ignition-fetch.service... 
Aug 13 00:51:55.187245 ignition[856]: Ignition 2.14.0 Aug 13 00:51:55.187251 ignition[856]: Stage: fetch Aug 13 00:51:55.187349 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:55.187374 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:55.190816 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:55.193744 ignition[856]: parsed url from cmdline: "" Aug 13 00:51:55.193748 ignition[856]: no config URL provided Aug 13 00:51:55.193754 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:51:55.193766 ignition[856]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:51:55.193817 ignition[856]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 00:51:55.269647 ignition[856]: GET result: OK Aug 13 00:51:55.269854 ignition[856]: config has been read from IMDS userdata Aug 13 00:51:55.269915 ignition[856]: parsing config with SHA512: 2f2b9a773f4f4c1bd5beb01bc6fabf97f97ef700937b2f6c32cb33a6cd038743e947cf3221ccb3cbe8c5b1f67c8d48c1a769dbd7cacece71868e6d6ad266fdb0 Aug 13 00:51:55.275351 unknown[856]: fetched base config from "system" Aug 13 00:51:55.275363 unknown[856]: fetched base config from "system" Aug 13 00:51:55.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.275971 ignition[856]: fetch: fetch complete Aug 13 00:51:55.294671 kernel: audit: type=1130 audit(1755046315.278:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.275387 unknown[856]: fetched user config from "azure" Aug 13 00:51:55.275976 ignition[856]: fetch: fetch passed Aug 13 00:51:55.277425 systemd[1]: Finished ignition-fetch.service. Aug 13 00:51:55.276017 ignition[856]: Ignition finished successfully Aug 13 00:51:55.280187 systemd[1]: Starting ignition-kargs.service... Aug 13 00:51:55.310106 ignition[862]: Ignition 2.14.0 Aug 13 00:51:55.310117 ignition[862]: Stage: kargs Aug 13 00:51:55.310238 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:55.310265 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:55.313079 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:55.318216 ignition[862]: kargs: kargs passed Aug 13 00:51:55.318265 ignition[862]: Ignition finished successfully Aug 13 00:51:55.338000 kernel: audit: type=1130 audit(1755046315.321:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.319166 systemd[1]: Finished ignition-kargs.service. Aug 13 00:51:55.332041 ignition[868]: Ignition 2.14.0 Aug 13 00:51:55.323622 systemd[1]: Starting ignition-disks.service... 
Aug 13 00:51:55.332050 ignition[868]: Stage: disks Aug 13 00:51:55.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.342133 systemd[1]: Finished ignition-disks.service. Aug 13 00:51:55.359158 kernel: audit: type=1130 audit(1755046315.344:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.332168 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:55.345232 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:51:55.332202 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:55.359178 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:51:55.339012 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:55.363538 systemd[1]: Reached target local-fs.target. Aug 13 00:51:55.341221 ignition[868]: disks: disks passed Aug 13 00:51:55.341348 ignition[868]: Ignition finished successfully Aug 13 00:51:55.374651 systemd[1]: Reached target sysinit.target. Aug 13 00:51:55.376215 systemd[1]: Reached target basic.target. Aug 13 00:51:55.380814 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:51:55.448389 systemd-fsck[876]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks Aug 13 00:51:55.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.456973 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:51:55.473551 kernel: audit: type=1130 audit(1755046315.458:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.459947 systemd[1]: Mounting sysroot.mount... Aug 13 00:51:55.488653 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:51:55.489108 systemd[1]: Mounted sysroot.mount. Aug 13 00:51:55.490748 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:51:55.526430 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:51:55.531505 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:51:55.536400 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:51:55.537272 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:51:55.545092 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:51:55.602514 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:51:55.605331 systemd[1]: Starting initrd-setup-root.service... 
Aug 13 00:51:55.630391 initrd-setup-root[892]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:51:55.636221 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (887) Aug 13 00:51:55.636243 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:51:55.636256 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:51:55.636269 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:51:55.645940 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:51:55.670272 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:51:55.692171 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:51:55.717426 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:51:56.340574 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:51:56.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:56.344741 systemd[1]: Starting ignition-mount.service... Aug 13 00:51:56.365229 kernel: audit: type=1130 audit(1755046316.343:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:56.363777 systemd[1]: Starting sysroot-boot.service... Aug 13 00:51:56.369854 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Aug 13 00:51:56.369995 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Aug 13 00:51:56.393341 systemd[1]: Finished sysroot-boot.service. Aug 13 00:51:56.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:56.409630 kernel: audit: type=1130 audit(1755046316.394:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:56.441747 ignition[956]: INFO : Ignition 2.14.0 Aug 13 00:51:56.441747 ignition[956]: INFO : Stage: mount Aug 13 00:51:56.445652 ignition[956]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:56.445652 ignition[956]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:56.455941 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:56.455941 ignition[956]: INFO : mount: mount passed Aug 13 00:51:56.455941 ignition[956]: INFO : Ignition finished successfully Aug 13 00:51:56.472519 kernel: audit: type=1130 audit(1755046316.455:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:56.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:56.449644 systemd[1]: Finished ignition-mount.service. 
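The "cut: ... No such file or directory" lines above come from initrd-setup-root probing account databases that do not exist yet on the freshly created root. The sketch below only illustrates the kind of field extraction that cut -d: -f1 performs and the error seen in the log; the real script's exact logic is not reproduced here.

```go
// Hypothetical illustration of what the failing "cut" invocations above are
// doing in spirit: pulling the first ":"-separated field out of passwd-style
// files under /sysroot. The errors in the log simply mean those files do not
// exist yet; the real initrd-setup-root script is not reproduced here.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// firstField mimics `cut -d: -f1` for one passwd/group/shadow-style file.
func firstField(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err // e.g. "no such file or directory", as seen in the log
	}
	defer f.Close()

	var names []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if fields := strings.SplitN(sc.Text(), ":", 2); fields[0] != "" {
			names = append(names, fields[0])
		}
	}
	return names, sc.Err()
}

func main() {
	for _, p := range []string{"/sysroot/etc/passwd", "/sysroot/etc/group", "/sysroot/etc/shadow", "/sysroot/etc/gshadow"} {
		names, err := firstField(p)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cut: %s: %v\n", p, err)
			continue
		}
		fmt.Println(p, names)
	}
}
```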
Aug 13 00:51:57.283194 coreos-metadata[886]: Aug 13 00:51:57.283 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:51:57.302852 coreos-metadata[886]: Aug 13 00:51:57.302 INFO Fetch successful Aug 13 00:51:57.338678 coreos-metadata[886]: Aug 13 00:51:57.338 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:51:57.353983 coreos-metadata[886]: Aug 13 00:51:57.353 INFO Fetch successful Aug 13 00:51:57.374917 coreos-metadata[886]: Aug 13 00:51:57.374 INFO wrote hostname ci-3510.3.8-a-1859c445b4 to /sysroot/etc/hostname Aug 13 00:51:57.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:57.377158 systemd[1]: Finished flatcar-metadata-hostname.service. Aug 13 00:51:57.393874 kernel: audit: type=1130 audit(1755046317.380:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:57.381759 systemd[1]: Starting ignition-files.service... Aug 13 00:51:57.400082 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:51:57.421705 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (965) Aug 13 00:51:57.421745 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:51:57.421760 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:51:57.428502 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:51:57.438096 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:51:57.450292 ignition[984]: INFO : Ignition 2.14.0 Aug 13 00:51:57.450292 ignition[984]: INFO : Stage: files Aug 13 00:51:57.453931 ignition[984]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:57.453931 ignition[984]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:57.466394 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:57.505575 ignition[984]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:51:57.523986 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:51:57.523986 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:51:57.591347 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:51:57.594641 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:51:57.626841 unknown[984]: wrote ssh authorized keys file for user: core Aug 13 00:51:57.629365 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:51:57.649812 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:51:57.654159 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:51:57.654159 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:51:57.654159 
ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:51:57.715167 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:51:57.959692 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:51:57.964345 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:51:57.964345 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:51:57.964345 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:51:57.964345 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:51:57.964345 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:51:57.964345 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:51:57.964345 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:51:57.964345 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:51:57.995936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:51:57.995936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:51:57.995936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:51:57.995936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:51:57.995936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:51:57.995936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:51:57.995936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3725774039" Aug 13 00:51:57.995936 ignition[984]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3725774039": device or resource busy Aug 13 00:51:57.995936 ignition[984]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3725774039", trying btrfs: device or resource busy Aug 13 00:51:57.995936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3725774039" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: 
createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3725774039" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3725774039" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3725774039" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem66130585" Aug 13 00:51:58.044594 ignition[984]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem66130585": device or resource busy Aug 13 00:51:58.044594 ignition[984]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem66130585", trying btrfs: device or resource busy Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem66130585" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem66130585" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem66130585" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem66130585" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:51:58.044594 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:51:58.004030 systemd[1]: mnt-oem3725774039.mount: Deactivated successfully. Aug 13 00:51:58.116288 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:51:58.020252 systemd[1]: mnt-oem66130585.mount: Deactivated successfully. 
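The op(b) and op(f) sequences above show Ignition's mount fallback for the OEM partition: try ext4 first and, on failure, retry as btrfs. A hedged approximation of that retry loop using mount(8); Ignition's own implementation differs in detail.

```go
// Hedged approximation of the mount fallback visible above: try the OEM
// partition as ext4 and, when that fails, retry as btrfs. This uses mount(8)
// for clarity; Ignition's internal mount code differs.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func mountWithFallback(device, target string, fstypes ...string) error {
	var lastErr error
	for _, fstype := range fstypes {
		out, err := exec.Command("mount", "-t", fstype, device, target).CombinedOutput()
		if err == nil {
			log.Printf("mounted %q at %q as %s", device, target, fstype)
			return nil
		}
		lastErr = fmt.Errorf("mount as %s: %v (%s)", fstype, err, out)
		log.Printf("failed to mount %s device %q at %q, trying next type", fstype, device, target)
	}
	return lastErr
}

func main() {
	tmp, err := os.MkdirTemp("/mnt", "oem")
	if err != nil {
		log.Fatal(err)
	}
	if err := mountWithFallback("/dev/disk/by-label/OEM", tmp, "ext4", "btrfs"); err != nil {
		log.Fatal(err)
	}
}
```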
Aug 13 00:51:58.493182 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Aug 13 00:51:58.676551 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:51:58.676551 ignition[984]: INFO : files: op(14): [started] processing unit "waagent.service" Aug 13 00:51:58.676551 ignition[984]: INFO : files: op(14): [finished] processing unit "waagent.service" Aug 13 00:51:58.676551 ignition[984]: INFO : files: op(15): [started] processing unit "nvidia.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(15): [finished] processing unit "nvidia.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(16): [started] processing unit "containerd.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(16): [finished] processing unit "containerd.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:51:58.691462 ignition[984]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:51:58.745324 ignition[984]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:51:58.745324 ignition[984]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:51:58.745324 ignition[984]: INFO : files: files passed Aug 13 00:51:58.745324 ignition[984]: INFO : Ignition finished successfully Aug 13 00:51:58.757444 systemd[1]: Finished ignition-files.service. Aug 13 00:51:58.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.762680 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
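During the files stage above Ignition also wrote a containerd drop-in (10-use-cgroupfs.conf), installed prepare-helm.service, and set several units' presets to enabled. The sketch below emulates the end result on disk; the drop-in body is a placeholder and the assumption that "enabled" means a multi-user.target.wants symlink is illustrative, since the real payload comes from the Ignition config.

```go
// Hedged sketch of the drop-in and preset steps above, reproducing their effect
// on disk under /sysroot. The drop-in body is a placeholder (the real contents
// come from the Ignition payload), and "preset to enabled" is emulated as the
// multi-user.target.wants symlink that enabling a WantedBy=multi-user.target
// unit would create, which is an assumption for illustration.
package main

import (
	"log"
	"os"
	"path/filepath"
)

const sysroot = "/sysroot"

func main() {
	// Drop-in: /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf
	dropinDir := filepath.Join(sysroot, "etc/systemd/system/containerd.service.d")
	if err := os.MkdirAll(dropinDir, 0755); err != nil {
		log.Fatal(err)
	}
	placeholder := "# drop-in contents are supplied by the Ignition config\n"
	if err := os.WriteFile(filepath.Join(dropinDir, "10-use-cgroupfs.conf"), []byte(placeholder), 0644); err != nil {
		log.Fatal(err)
	}

	// "Setting preset to enabled" for prepare-helm.service, approximated as a wants symlink.
	wantsDir := filepath.Join(sysroot, "etc/systemd/system/multi-user.target.wants")
	if err := os.MkdirAll(wantsDir, 0755); err != nil {
		log.Fatal(err)
	}
	link := filepath.Join(wantsDir, "prepare-helm.service")
	if err := os.Symlink("/etc/systemd/system/prepare-helm.service", link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	log.Print("wrote containerd drop-in and enabled prepare-helm.service under /sysroot")
}
```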
Aug 13 00:51:58.779186 kernel: audit: type=1130 audit(1755046318.759:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.774047 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:51:58.774775 systemd[1]: Starting ignition-quench.service... Aug 13 00:51:58.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.790027 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:51:58.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.778022 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:51:58.778116 systemd[1]: Finished ignition-quench.service. Aug 13 00:51:58.786624 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:51:58.790020 systemd[1]: Reached target ignition-complete.target. Aug 13 00:51:58.792684 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:51:58.811350 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:51:58.811429 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:51:58.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.817055 systemd[1]: Reached target initrd-fs.target. Aug 13 00:51:58.820415 systemd[1]: Reached target initrd.target. Aug 13 00:51:58.823557 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:51:58.826886 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:51:58.837153 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:51:58.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.841305 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:51:58.850642 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:51:58.854382 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:51:58.856706 systemd[1]: Stopped target timers.target. Aug 13 00:51:58.860376 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:51:58.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.860516 systemd[1]: Stopped dracut-pre-pivot.service. 
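Several units in this log are "skipped because of an unmet condition check", e.g. ConditionPathExists=/sysroot/etc/torcx/next-profile above and ConditionPathIsReadWrite=!/sysroot earlier. A minimal sketch of how such a path condition, including the leading "!" negation, can be evaluated; illustration only, not systemd's implementation.

```go
// Minimal sketch of the ConditionPathExists= rule systemd evaluates before
// starting a unit, honouring the negated "!<path>" form. Illustration only.
package main

import (
	"fmt"
	"os"
	"strings"
)

// conditionPathExists returns true when "ConditionPathExists=<arg>" would hold.
func conditionPathExists(arg string) bool {
	negate := strings.HasPrefix(arg, "!")
	path := strings.TrimPrefix(arg, "!")
	_, err := os.Stat(path)
	exists := err == nil
	if negate {
		return !exists
	}
	return exists
}

func main() {
	for _, c := range []string{"/sysroot/etc/torcx/next-profile", "!/sysroot/etc/torcx/next-profile"} {
		fmt.Printf("ConditionPathExists=%s -> %v\n", c, conditionPathExists(c))
	}
}
```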
Aug 13 00:51:58.864057 systemd[1]: Stopped target initrd.target. Aug 13 00:51:58.867681 systemd[1]: Stopped target basic.target. Aug 13 00:51:58.870971 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:51:58.874459 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:51:58.877856 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:51:58.881697 systemd[1]: Stopped target remote-fs.target. Aug 13 00:51:58.885139 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:51:58.888780 systemd[1]: Stopped target sysinit.target. Aug 13 00:51:58.892067 systemd[1]: Stopped target local-fs.target. Aug 13 00:51:58.895469 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:51:58.898864 systemd[1]: Stopped target swap.target. Aug 13 00:51:58.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.902663 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:51:58.902811 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:51:58.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.906680 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:51:58.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.910156 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:51:58.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.910298 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:51:58.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.914652 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:51:58.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.914782 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:51:58.918646 systemd[1]: ignition-files.service: Deactivated successfully. 
Aug 13 00:51:58.956898 ignition[1023]: INFO : Ignition 2.14.0 Aug 13 00:51:58.956898 ignition[1023]: INFO : Stage: umount Aug 13 00:51:58.956898 ignition[1023]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:58.956898 ignition[1023]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:58.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.918770 systemd[1]: Stopped ignition-files.service. Aug 13 00:51:58.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:58.983956 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:58.983956 ignition[1023]: INFO : umount: umount passed Aug 13 00:51:58.983956 ignition[1023]: INFO : Ignition finished successfully Aug 13 00:51:58.922177 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:51:58.922304 systemd[1]: Stopped flatcar-metadata-hostname.service. Aug 13 00:51:58.927940 systemd[1]: Stopping ignition-mount.service... Aug 13 00:51:58.933713 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:51:58.933914 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:51:58.937654 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:51:58.939616 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:51:58.939787 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:51:58.942165 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:51:58.942321 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:51:58.947309 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:51:58.947401 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:51:58.963377 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:51:59.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:58.963468 systemd[1]: Stopped ignition-mount.service. Aug 13 00:51:58.966363 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:51:58.966408 systemd[1]: Stopped ignition-disks.service. Aug 13 00:51:58.973506 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:51:58.973557 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:51:58.975405 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:51:58.975454 systemd[1]: Stopped ignition-fetch.service. Aug 13 00:51:58.977256 systemd[1]: Stopped target network.target. Aug 13 00:51:58.979836 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:51:58.979889 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:51:58.983919 systemd[1]: Stopped target paths.target. Aug 13 00:51:58.988218 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:51:58.992037 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:51:59.008285 systemd[1]: Stopped target slices.target. Aug 13 00:51:59.014470 systemd[1]: Stopped target sockets.target. Aug 13 00:51:59.016802 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:51:59.016846 systemd[1]: Closed iscsid.socket. Aug 13 00:51:59.020460 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:51:59.020493 systemd[1]: Closed iscsiuio.socket. Aug 13 00:51:59.024580 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:51:59.026219 systemd[1]: Stopped ignition-setup.service. Aug 13 00:51:59.035688 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:51:59.038882 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:51:59.040676 systemd-networkd[823]: eth0: DHCPv6 lease lost Aug 13 00:51:59.080561 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:51:59.084918 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:51:59.087179 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:51:59.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.091569 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:51:59.093897 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:51:59.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.098075 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:51:59.099000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:51:59.099000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:51:59.098125 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:51:59.104867 systemd[1]: Stopping network-cleanup.service... Aug 13 00:51:59.108409 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:51:59.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.108476 systemd[1]: Stopped parse-ip-for-networkd.service. 
Aug 13 00:51:59.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.110582 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:51:59.110661 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:51:59.114822 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:51:59.114872 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:51:59.119179 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:51:59.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.124496 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:51:59.128410 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:51:59.128548 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:51:59.133212 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:51:59.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.133263 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:51:59.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.138040 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:51:59.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.138084 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:51:59.144368 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:51:59.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.144418 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:51:59.172863 kernel: hv_netvsc 6045bdf1-b772-6045-bdf1-b7726045bdf1 eth0: Data path switched from VF: enP29208s1 Aug 13 00:51:59.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.148515 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:51:59.148564 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:51:59.152433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:51:59.152480 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:51:59.157651 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:51:59.160522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 13 00:51:59.160579 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:51:59.164534 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:51:59.164631 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:51:59.194057 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:51:59.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.194175 systemd[1]: Stopped network-cleanup.service. Aug 13 00:51:59.484435 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:51:59.484577 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:51:59.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.491206 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:51:59.493330 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:51:59.493387 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:51:59.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:59.502216 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:51:59.570127 systemd[1]: Switching root. Aug 13 00:51:59.572000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:51:59.572000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:51:59.572000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:51:59.572000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:51:59.572000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:51:59.601098 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 13 00:51:59.601183 iscsid[828]: iscsid shutting down. Aug 13 00:51:59.602861 systemd-journald[183]: Journal stopped Aug 13 00:52:21.097427 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:52:21.097456 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:52:21.097470 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:52:21.097478 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:52:21.097489 kernel: SELinux: policy capability open_perms=1 Aug 13 00:52:21.097499 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:52:21.097510 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:52:21.097521 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:52:21.097531 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:52:21.097542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:52:21.097550 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:52:21.097560 kernel: kauditd_printk_skb: 46 callbacks suppressed Aug 13 00:52:21.097570 kernel: audit: type=1403 audit(1755046324.167:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:52:21.097583 systemd[1]: Successfully loaded SELinux policy in 387.754ms. Aug 13 00:52:21.097597 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 39.353ms. 
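After the switch into the real root, the kernel prints the SELinux policy capabilities it loaded. Assuming selinuxfs is mounted at /sys/fs/selinux and exposes a policy_capabilities/ directory, as on typical SELinux-enabled systems, the same flags can be read back like this:

```go
// Small sketch (not part of systemd or the kernel) that reads back the same
// policy-capability flags the kernel logs above. Assumes selinuxfs is mounted
// at /sys/fs/selinux and exposes a policy_capabilities/ directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/sys/fs/selinux/policy_capabilities"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "selinuxfs not available:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		val, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue
		}
		fmt.Printf("SELinux: policy capability %s=%s\n", e.Name(), strings.TrimSpace(string(val)))
	}
}
```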
Aug 13 00:52:21.097619 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:52:21.097629 systemd[1]: Detected virtualization microsoft. Aug 13 00:52:21.097641 systemd[1]: Detected architecture x86-64. Aug 13 00:52:21.097652 systemd[1]: Detected first boot. Aug 13 00:52:21.097664 systemd[1]: Hostname set to . Aug 13 00:52:21.097676 systemd[1]: Initializing machine ID from random generator. Aug 13 00:52:21.097687 kernel: audit: type=1400 audit(1755046325.170:84): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:52:21.097698 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:52:21.097709 kernel: audit: type=1400 audit(1755046326.791:85): avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:52:21.097721 kernel: audit: type=1300 audit(1755046326.791:85): arch=c000003e syscall=188 success=yes exit=0 a0=c0001072d2 a1=c00002c600 a2=c00002a800 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.097735 kernel: audit: type=1327 audit(1755046326.791:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:52:21.097744 kernel: audit: type=1400 audit(1755046326.799:86): avc: denied { associate } for pid=1075 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:52:21.097756 kernel: audit: type=1300 audit(1755046326.799:86): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001073b9 a2=1ed a3=0 items=2 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.097768 kernel: audit: type=1307 audit(1755046326.799:86): cwd="/" Aug 13 00:52:21.097777 kernel: audit: type=1302 audit(1755046326.799:86): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:21.097789 kernel: audit: type=1302 audit(1755046326.799:86): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:21.097801 systemd[1]: Populated /etc with preset unit settings. 
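"Initializing machine ID from random generator" above refers to the 128-bit /etc/machine-id created on first boot. A hedged sketch of that idea, setting the UUID-v4 bits the way sd_id128 randomization is documented to; illustration only, not systemd's code.

```go
// Hedged sketch of "Initializing machine ID from random generator": draw 128
// random bits, set the UUID-v4 version/variant bits, and format the result as
// the 32-hex-character /etc/machine-id string. Illustration only.
package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	var id [16]byte
	if _, err := rand.Read(id[:]); err != nil {
		panic(err)
	}
	id[6] = (id[6] & 0x0f) | 0x40 // version 4
	id[8] = (id[8] & 0x3f) | 0x80 // RFC 4122 variant
	fmt.Printf("%x\n", id[:]) // the string written to /etc/machine-id, newline-terminated
}
```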
Aug 13 00:52:21.097812 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:52:21.097823 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:52:21.097836 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:52:21.097847 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:52:21.097859 systemd[1]: Unnecessary job was removed for dev-sda6.device. Aug 13 00:52:21.097874 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:52:21.097885 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:52:21.097895 systemd[1]: Created slice system-getty.slice. Aug 13 00:52:21.097909 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:52:21.097922 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:52:21.097932 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:52:21.097945 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:52:21.097957 systemd[1]: Created slice user.slice. Aug 13 00:52:21.097967 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:52:21.097981 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:52:21.097994 systemd[1]: Set up automount boot.automount. Aug 13 00:52:21.098003 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:52:21.098015 systemd[1]: Reached target integritysetup.target. Aug 13 00:52:21.098026 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:52:21.098037 systemd[1]: Reached target remote-fs.target. Aug 13 00:52:21.098048 systemd[1]: Reached target slices.target. Aug 13 00:52:21.098059 systemd[1]: Reached target swap.target. Aug 13 00:52:21.098071 systemd[1]: Reached target torcx.target. Aug 13 00:52:21.098083 systemd[1]: Reached target veritysetup.target. Aug 13 00:52:21.098095 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:52:21.098109 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:52:21.098118 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:52:21.098130 kernel: audit: type=1400 audit(1755046340.736:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:52:21.098142 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:52:21.098152 kernel: audit: type=1335 audit(1755046340.736:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:52:21.098166 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:52:21.098178 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:52:21.098188 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:52:21.098200 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:52:21.098211 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:52:21.098225 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:52:21.098236 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:52:21.098247 systemd[1]: Mounting dev-mqueue.mount... 
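The locksmithd warnings above flag cgroup-v1 era directives (CPUShares=, MemoryLimit=) that should become CPUWeight= and MemoryMax=. The CPU conversion is not a simple rename; the sketch below uses the commonly cited shares*100/1024 mapping clamped to [1, 10000], which should be read as an assumption about the conversion rather than a quote of systemd's source. MemoryLimit= maps to MemoryMax= with the same byte value, so only the key changes there.

```go
// Assumed conversion from the deprecated CPUShares= value to a CPUWeight=
// value: shares*100/1024, clamped to [1, 10000]. Treat the exact formula as an
// assumption, not a quote of systemd's source.
package main

import "fmt"

func cpuSharesToWeight(shares uint64) uint64 {
	w := shares * 100 / 1024
	if w < 1 {
		w = 1
	}
	if w > 10000 {
		w = 10000
	}
	return w
}

func main() {
	for _, shares := range []uint64{2, 512, 1024, 4096} {
		fmt.Printf("CPUShares=%d  ->  CPUWeight=%d\n", shares, cpuSharesToWeight(shares))
	}
}
```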
Aug 13 00:52:21.098260 systemd[1]: Mounting media.mount... Aug 13 00:52:21.098270 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:21.098283 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:52:21.098295 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:52:21.098305 systemd[1]: Mounting tmp.mount... Aug 13 00:52:21.098318 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:52:21.098333 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:52:21.098343 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:52:21.098357 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:52:21.098370 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:52:21.098380 systemd[1]: Starting modprobe@drm.service... Aug 13 00:52:21.098392 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:52:21.098404 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:52:21.098415 systemd[1]: Starting modprobe@loop.service... Aug 13 00:52:21.098425 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:52:21.098437 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:52:21.098450 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:52:21.098461 systemd[1]: Starting systemd-journald.service... Aug 13 00:52:21.098472 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:52:21.098485 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:52:21.098496 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:52:21.098507 kernel: loop: module loaded Aug 13 00:52:21.098518 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:52:21.098533 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:21.098543 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:52:21.098556 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:52:21.098568 systemd[1]: Mounted media.mount. Aug 13 00:52:21.098578 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:52:21.098591 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:52:21.098610 systemd[1]: Mounted tmp.mount. Aug 13 00:52:21.098623 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:52:21.098635 kernel: audit: type=1305 audit(1755046341.094:89): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:52:21.098651 systemd-journald[1169]: Journal started Aug 13 00:52:21.098695 systemd-journald[1169]: Runtime Journal (/run/log/journal/d8f3317a01774cc5939c1a5a2d767b5c) is 8.0M, max 159.0M, 151.0M free. Aug 13 00:52:20.736000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:52:21.094000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:52:21.104348 systemd[1]: Started systemd-journald.service. 
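The modprobe@<module>.service instances being started above (configfs, dm_mod, drm, efi_pstore, fuse, loop) each boil down to a single modprobe call for the instance name; the "loop: module loaded" and "fuse: init" kernel lines nearby are the visible result. The exact flags passed by the stock unit are assumed in this sketch.

```go
// Sketch of what the modprobe@<module>.service instances above boil down to:
// one modprobe call per instance name. The flags used here are an assumption,
// not a quote of the stock unit file.
package main

import (
	"log"
	"os/exec"
)

func main() {
	for _, module := range []string{"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"} {
		// -a: treat all arguments as module names; -b: honour blacklists; -q: quiet if unknown.
		if err := exec.Command("modprobe", "-abq", module).Run(); err != nil {
			log.Printf("modprobe@%s: %v", module, err)
			continue
		}
		log.Printf("modprobe@%s.service: module loaded (or already present)", module)
	}
}
```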
Aug 13 00:52:21.107629 kernel: audit: type=1300 audit(1755046341.094:89): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe425c2c50 a2=4000 a3=7ffe425c2cec items=0 ppid=1 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.094000 audit[1169]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe425c2c50 a2=4000 a3=7ffe425c2cec items=0 ppid=1 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.127714 kernel: audit: type=1327 audit(1755046341.094:89): proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:52:21.094000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:52:21.127641 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:52:21.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.143328 kernel: audit: type=1130 audit(1755046341.100:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.157006 kernel: audit: type=1130 audit(1755046341.126:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.157035 kernel: fuse: init (API version 7.34) Aug 13 00:52:21.159673 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:52:21.159900 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:52:21.171659 kernel: audit: type=1130 audit(1755046341.158:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.175520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:52:21.175709 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:52:21.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:52:21.198749 kernel: audit: type=1130 audit(1755046341.174:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.198791 kernel: audit: type=1131 audit(1755046341.174:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.199550 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:52:21.199780 systemd[1]: Finished modprobe@drm.service. Aug 13 00:52:21.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.201968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:52:21.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.202174 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:52:21.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.204463 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:52:21.204643 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:52:21.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.206852 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:52:21.207093 systemd[1]: Finished modprobe@loop.service. Aug 13 00:52:21.209656 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:52:21.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:52:21.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.212514 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:52:21.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.215138 systemd[1]: Reached target network-pre.target. Aug 13 00:52:21.218410 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:52:21.222141 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:52:21.224031 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:52:21.286398 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:52:21.290098 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:52:21.292211 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:52:21.293293 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:52:21.295209 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:52:21.296347 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:52:21.300552 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:52:21.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.303044 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:52:21.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.305397 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:52:21.307560 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:52:21.311246 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:52:21.314191 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:52:21.343413 systemd-journald[1169]: Time spent on flushing to /var/log/journal/d8f3317a01774cc5939c1a5a2d767b5c is 16.801ms for 1084 entries. Aug 13 00:52:21.343413 systemd-journald[1169]: System Journal (/var/log/journal/d8f3317a01774cc5939c1a5a2d767b5c) is 8.0M, max 2.6G, 2.6G free. Aug 13 00:52:21.449744 systemd-journald[1169]: Received client request to flush runtime journal. Aug 13 00:52:21.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.418741 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:52:21.450140 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
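systemd-random-seed, finished above, carries entropy across boots via a seed file. A hedged sketch of the idea: mix the saved seed into /dev/urandom, then write a fresh one back. Path and size follow the conventional /var/lib/systemd/random-seed; the real service also credits entropy via ioctl and handles first boot, both omitted here.

```go
// Hedged sketch of the systemd-random-seed idea: feed the seed saved by the
// previous boot into /dev/urandom, then store a fresh seed for the next boot.
// Illustration only, not systemd's implementation.
package main

import (
	"crypto/rand"
	"io"
	"log"
	"os"
)

const seedPath = "/var/lib/systemd/random-seed"

func main() {
	// 1. Feed the previous boot's seed (if any) into the kernel pool.
	if seed, err := os.ReadFile(seedPath); err == nil {
		if urandom, err := os.OpenFile("/dev/urandom", os.O_WRONLY, 0); err == nil {
			_, _ = urandom.Write(seed)
			urandom.Close()
		}
	}

	// 2. Save a fresh seed for the next boot.
	fresh := make([]byte, 512)
	if _, err := io.ReadFull(rand.Reader, fresh); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(seedPath, fresh, 0600); err != nil {
		log.Fatal(err)
	}
	log.Printf("refreshed %s (%d bytes)", seedPath, len(fresh))
}
```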
Aug 13 00:52:21.421821 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:52:21.450734 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:52:21.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:21.502011 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:52:22.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:22.263331 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:52:22.267592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:52:23.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:23.152944 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:52:23.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:23.402037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:52:23.407324 systemd[1]: Starting systemd-udevd.service... Aug 13 00:52:23.425933 systemd-udevd[1238]: Using default interface naming scheme 'v252'. Aug 13 00:52:24.690743 systemd[1]: Started systemd-udevd.service. Aug 13 00:52:24.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:24.695421 systemd[1]: Starting systemd-networkd.service... Aug 13 00:52:24.731732 systemd[1]: Found device dev-ttyS0.device. 
Aug 13 00:52:24.811630 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:52:24.825000 audit[1255]: AVC avc: denied { confidentiality } for pid=1255 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:52:24.844635 kernel: hv_vmbus: registering driver hv_balloon Aug 13 00:52:24.849670 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 13 00:52:24.861625 kernel: hv_vmbus: registering driver hyperv_fb Aug 13 00:52:24.861684 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 00:52:24.868809 kernel: hv_vmbus: registering driver hv_utils Aug 13 00:52:24.825000 audit[1255]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5577f73dbac0 a1=f83c a2=7fe42b713bc5 a3=5 items=12 ppid=1238 pid=1255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:24.825000 audit: CWD cwd="/" Aug 13 00:52:24.825000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=1 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=2 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.877241 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 00:52:24.877289 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 13 00:52:24.877313 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 00:52:24.825000 audit: PATH item=3 name=(null) inode=14194 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=4 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=5 name=(null) inode=14195 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=6 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=7 name=(null) inode=14196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=8 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=9 name=(null) inode=14197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=10 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PATH item=11 name=(null) inode=14198 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:52:24.825000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:52:24.886469 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 13 00:52:24.886526 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 00:52:24.891801 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:52:25.020638 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:52:25.022925 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:52:25.101848 systemd[1]: Started systemd-userdbd.service. Aug 13 00:52:25.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:25.328586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:52:25.410878 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Aug 13 00:52:25.490330 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:52:25.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:25.495046 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:52:25.868294 lvm[1316]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:52:25.923559 systemd-networkd[1244]: lo: Link UP Aug 13 00:52:25.923572 systemd-networkd[1244]: lo: Gained carrier Aug 13 00:52:25.924337 systemd-networkd[1244]: Enumeration completed Aug 13 00:52:25.924532 systemd[1]: Started systemd-networkd.service. Aug 13 00:52:25.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:25.928791 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:52:25.930294 kernel: kauditd_printk_skb: 39 callbacks suppressed Aug 13 00:52:25.930359 kernel: audit: type=1130 audit(1755046345.926:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:25.948112 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:52:25.950548 systemd[1]: Reached target cryptsetup.target. Aug 13 00:52:25.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:25.961936 kernel: audit: type=1130 audit(1755046345.949:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:25.962510 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 00:52:25.965173 systemd[1]: Starting lvm2-activation.service... Aug 13 00:52:25.971821 lvm[1319]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:52:25.991805 systemd[1]: Finished lvm2-activation.service. Aug 13 00:52:25.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:25.994315 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:52:26.007586 kernel: audit: type=1130 audit(1755046345.994:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.006744 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:52:26.006780 systemd[1]: Reached target local-fs.target. Aug 13 00:52:26.008516 systemd[1]: Reached target machines.target. Aug 13 00:52:26.012078 systemd[1]: Starting ldconfig.service... Aug 13 00:52:26.023872 kernel: mlx5_core 7218:00:02.0 enP29208s1: Link up Aug 13 00:52:26.042872 kernel: hv_netvsc 6045bdf1-b772-6045-bdf1-b7726045bdf1 eth0: Data path switched to VF: enP29208s1 Aug 13 00:52:26.043569 systemd-networkd[1244]: enP29208s1: Link UP Aug 13 00:52:26.043714 systemd-networkd[1244]: eth0: Link UP Aug 13 00:52:26.043725 systemd-networkd[1244]: eth0: Gained carrier Aug 13 00:52:26.049126 systemd-networkd[1244]: enP29208s1: Gained carrier Aug 13 00:52:26.056812 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:52:26.056912 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:26.058191 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:52:26.061344 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:52:26.065223 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:52:26.069142 systemd[1]: Starting systemd-sysext.service... Aug 13 00:52:26.071971 systemd-networkd[1244]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 00:52:26.525719 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1322 (bootctl) Aug 13 00:52:26.527450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:52:26.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.580083 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:52:26.592874 kernel: audit: type=1130 audit(1755046346.579:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.599809 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:52:26.605845 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:52:26.606229 systemd[1]: Unmounted usr-share-oem.mount. 
Aug 13 00:52:26.661884 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 00:52:26.745877 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:52:26.762874 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 00:52:26.767629 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:52:26.768326 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:52:26.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.781875 kernel: audit: type=1130 audit(1755046346.770:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.792577 (sd-sysext)[1337]: Using extensions 'kubernetes'. Aug 13 00:52:26.793127 (sd-sysext)[1337]: Merged extensions into '/usr'. Aug 13 00:52:26.810582 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:26.812158 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:52:26.814965 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:52:26.816770 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:52:26.820054 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:52:26.823644 systemd[1]: Starting modprobe@loop.service... Aug 13 00:52:26.824659 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:52:26.824831 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:26.824997 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:26.845314 kernel: audit: type=1130 audit(1755046346.833:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.831920 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:52:26.833122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:52:26.833267 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:52:26.846245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:52:26.846452 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:52:26.847664 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:52:26.847969 systemd[1]: Finished modprobe@loop.service. Aug 13 00:52:26.848435 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:52:26.848536 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Aug 13 00:52:26.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.862127 kernel: audit: type=1131 audit(1755046346.845:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.860847 systemd[1]: Finished systemd-sysext.service. Aug 13 00:52:26.868956 systemd[1]: Starting ensure-sysext.service... Aug 13 00:52:26.873516 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:52:26.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.895230 systemd[1]: Reloading. Aug 13 00:52:26.908881 kernel: audit: type=1130 audit(1755046346.845:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.908956 kernel: audit: type=1131 audit(1755046346.845:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.909563 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:52:26.921121 kernel: audit: type=1130 audit(1755046346.845:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:26.932748 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:52:26.958152 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 00:52:26.962475 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2025-08-13T00:52:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:52:26.962513 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2025-08-13T00:52:26Z" level=info msg="torcx already run" Aug 13 00:52:27.061338 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:52:27.061359 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:52:27.078328 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:52:27.160571 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:27.160904 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:52:27.162320 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:52:27.165042 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:52:27.167698 systemd[1]: Starting modprobe@loop.service... Aug 13 00:52:27.168649 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:52:27.168845 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:27.169066 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:27.170541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:52:27.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.170932 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:52:27.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:52:27.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.178845 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:52:27.179116 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:52:27.180441 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:52:27.180591 systemd[1]: Finished modprobe@loop.service. Aug 13 00:52:27.181959 systemd[1]: Finished ensure-sysext.service. Aug 13 00:52:27.183061 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:27.183309 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:52:27.184353 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:52:27.186459 systemd[1]: Starting modprobe@drm.service... Aug 13 00:52:27.187415 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:52:27.187489 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:27.187571 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:52:27.187665 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:27.202172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:52:27.202377 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:52:27.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.203452 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:52:27.204236 systemd[1]: Finished modprobe@drm.service. Aug 13 00:52:27.205484 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:52:27.252981 systemd-networkd[1244]: eth0: Gained IPv6LL Aug 13 00:52:27.254977 systemd[1]: Finished systemd-networkd-wait-online.service. 
Aug 13 00:52:27.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.897560 systemd-fsck[1334]: fsck.fat 4.2 (2021-01-31) Aug 13 00:52:27.897560 systemd-fsck[1334]: /dev/sda1: 789 files, 119324/258078 clusters Aug 13 00:52:27.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:27.899843 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:52:27.904411 systemd[1]: Mounting boot.mount... Aug 13 00:52:27.925095 systemd[1]: Mounted boot.mount. Aug 13 00:52:27.945581 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:52:27.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:30.142074 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:52:30.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:30.146755 systemd[1]: Starting audit-rules.service... Aug 13 00:52:30.150564 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:52:30.155570 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:52:30.160026 systemd[1]: Starting systemd-resolved.service... Aug 13 00:52:30.166518 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:52:30.172495 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:52:30.175045 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:52:30.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:30.177443 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:52:30.207000 audit[1471]: SYSTEM_BOOT pid=1471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:52:30.212780 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:52:30.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:30.373735 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:52:30.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:30.376544 systemd[1]: Reached target time-set.target. Aug 13 00:52:30.399277 systemd-resolved[1468]: Positive Trust Anchors: Aug 13 00:52:30.399291 systemd-resolved[1468]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:52:30.399346 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:52:30.597280 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:52:30.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:30.606000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:52:30.606000 audit[1487]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc16464ee0 a2=420 a3=0 items=0 ppid=1464 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:30.607790 augenrules[1487]: No rules Aug 13 00:52:30.606000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:52:30.608305 systemd[1]: Finished audit-rules.service. Aug 13 00:52:30.662675 systemd-resolved[1468]: Using system hostname 'ci-3510.3.8-a-1859c445b4'. Aug 13 00:52:30.664566 systemd[1]: Started systemd-resolved.service. Aug 13 00:52:30.667184 systemd[1]: Reached target network.target. Aug 13 00:52:30.669204 systemd[1]: Reached target network-online.target. Aug 13 00:52:30.669240 systemd-timesyncd[1469]: Contacted time server 131.111.8.60:123 (0.flatcar.pool.ntp.org). Aug 13 00:52:30.669315 systemd-timesyncd[1469]: Initial clock synchronization to Wed 2025-08-13 00:52:30.669857 UTC. Aug 13 00:52:30.671708 systemd[1]: Reached target nss-lookup.target. Aug 13 00:52:37.302160 ldconfig[1321]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:52:37.312522 systemd[1]: Finished ldconfig.service. Aug 13 00:52:37.317094 systemd[1]: Starting systemd-update-done.service... Aug 13 00:52:37.346537 systemd[1]: Finished systemd-update-done.service. Aug 13 00:52:37.349099 systemd[1]: Reached target sysinit.target. Aug 13 00:52:37.351224 systemd[1]: Started motdgen.path. Aug 13 00:52:37.352932 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:52:37.355607 systemd[1]: Started logrotate.timer. Aug 13 00:52:37.357282 systemd[1]: Started mdadm.timer. Aug 13 00:52:37.358836 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:52:37.360653 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:52:37.360692 systemd[1]: Reached target paths.target. Aug 13 00:52:37.362325 systemd[1]: Reached target timers.target. Aug 13 00:52:37.364510 systemd[1]: Listening on dbus.socket. Aug 13 00:52:37.367326 systemd[1]: Starting docker.socket... Aug 13 00:52:37.403323 systemd[1]: Listening on sshd.socket. 
Aug 13 00:52:37.405219 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:37.405795 systemd[1]: Listening on docker.socket. Aug 13 00:52:37.407570 systemd[1]: Reached target sockets.target. Aug 13 00:52:37.409370 systemd[1]: Reached target basic.target. Aug 13 00:52:37.411487 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:52:37.411550 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:52:37.411584 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:52:37.412688 systemd[1]: Starting containerd.service... Aug 13 00:52:37.415815 systemd[1]: Starting dbus.service... Aug 13 00:52:37.419414 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:52:37.422810 systemd[1]: Starting extend-filesystems.service... Aug 13 00:52:37.424674 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:52:37.441907 systemd[1]: Starting kubelet.service... Aug 13 00:52:37.445089 systemd[1]: Starting motdgen.service... Aug 13 00:52:37.448145 systemd[1]: Started nvidia.service. Aug 13 00:52:37.481564 systemd[1]: Starting prepare-helm.service... Aug 13 00:52:37.485037 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:52:37.488933 systemd[1]: Starting sshd-keygen.service... Aug 13 00:52:37.495420 systemd[1]: Starting systemd-logind.service... Aug 13 00:52:37.497442 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:37.497542 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:52:37.501131 systemd[1]: Starting update-engine.service... Aug 13 00:52:37.504382 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:52:37.514534 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:52:37.514846 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:52:37.538169 jq[1502]: false Aug 13 00:52:37.538443 jq[1518]: true Aug 13 00:52:37.538939 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:52:37.539226 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:52:37.565314 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:52:37.565618 systemd[1]: Finished motdgen.service. 
Aug 13 00:52:37.579796 jq[1528]: true Aug 13 00:52:37.611488 extend-filesystems[1503]: Found loop1 Aug 13 00:52:37.614237 extend-filesystems[1503]: Found sda Aug 13 00:52:37.614237 extend-filesystems[1503]: Found sda1 Aug 13 00:52:37.618794 extend-filesystems[1503]: Found sda2 Aug 13 00:52:37.618794 extend-filesystems[1503]: Found sda3 Aug 13 00:52:37.618794 extend-filesystems[1503]: Found usr Aug 13 00:52:37.618794 extend-filesystems[1503]: Found sda4 Aug 13 00:52:37.618794 extend-filesystems[1503]: Found sda6 Aug 13 00:52:37.618794 extend-filesystems[1503]: Found sda7 Aug 13 00:52:37.618794 extend-filesystems[1503]: Found sda9 Aug 13 00:52:37.618794 extend-filesystems[1503]: Checking size of /dev/sda9 Aug 13 00:52:37.642743 env[1532]: time="2025-08-13T00:52:37.642693584Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:52:37.672430 env[1532]: time="2025-08-13T00:52:37.672389809Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:52:37.672756 env[1532]: time="2025-08-13T00:52:37.672731348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:37.674344 env[1532]: time="2025-08-13T00:52:37.674312130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:52:37.674444 env[1532]: time="2025-08-13T00:52:37.674428844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:37.674831 env[1532]: time="2025-08-13T00:52:37.674804987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:52:37.674948 env[1532]: time="2025-08-13T00:52:37.674932602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:37.675031 env[1532]: time="2025-08-13T00:52:37.675015611Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:52:37.675089 env[1532]: time="2025-08-13T00:52:37.675077118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:37.675246 env[1532]: time="2025-08-13T00:52:37.675229636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:37.675562 env[1532]: time="2025-08-13T00:52:37.675542272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:37.675905 env[1532]: time="2025-08-13T00:52:37.675879311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:52:37.676000 env[1532]: time="2025-08-13T00:52:37.675985523Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 13 00:52:37.676120 env[1532]: time="2025-08-13T00:52:37.676103437Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:52:37.676191 env[1532]: time="2025-08-13T00:52:37.676179145Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:52:37.694816 env[1532]: time="2025-08-13T00:52:37.694744586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:52:37.694816 env[1532]: time="2025-08-13T00:52:37.694777290Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:52:37.694816 env[1532]: time="2025-08-13T00:52:37.694795692Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694832796Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694852199Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694891503Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694910005Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694928407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694945309Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694960811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694978713Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.694993415Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.695116429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:52:37.695295 env[1532]: time="2025-08-13T00:52:37.695207240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:52:37.695673 env[1532]: time="2025-08-13T00:52:37.695605285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:52:37.695673 env[1532]: time="2025-08-13T00:52:37.695639189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.695673 env[1532]: time="2025-08-13T00:52:37.695657992Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:52:37.695784 env[1532]: time="2025-08-13T00:52:37.695709497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Aug 13 00:52:37.695784 env[1532]: time="2025-08-13T00:52:37.695728800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.695784 env[1532]: time="2025-08-13T00:52:37.695747002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.695784 env[1532]: time="2025-08-13T00:52:37.695763804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.695942 env[1532]: time="2025-08-13T00:52:37.695780906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.695942 env[1532]: time="2025-08-13T00:52:37.695797508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.695942 env[1532]: time="2025-08-13T00:52:37.695813910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.695942 env[1532]: time="2025-08-13T00:52:37.695830511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.695942 env[1532]: time="2025-08-13T00:52:37.695850614Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:52:37.697562 env[1532]: time="2025-08-13T00:52:37.696259561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.697562 env[1532]: time="2025-08-13T00:52:37.696297465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.697562 env[1532]: time="2025-08-13T00:52:37.696318568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:52:37.697562 env[1532]: time="2025-08-13T00:52:37.696334070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:52:37.697562 env[1532]: time="2025-08-13T00:52:37.696352872Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:52:37.697562 env[1532]: time="2025-08-13T00:52:37.696366273Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:52:37.697562 env[1532]: time="2025-08-13T00:52:37.696394576Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:52:37.697562 env[1532]: time="2025-08-13T00:52:37.696431581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:52:37.697903 tar[1521]: linux-amd64/helm Aug 13 00:52:37.698204 env[1532]: time="2025-08-13T00:52:37.696662607Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:52:37.698204 env[1532]: time="2025-08-13T00:52:37.696729815Z" level=info msg="Connect containerd service" Aug 13 00:52:37.698204 env[1532]: time="2025-08-13T00:52:37.696764219Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.698497619Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.698755149Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.698798954Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.699396923Z" level=info msg="Start subscribing containerd event" Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.699455429Z" level=info msg="Start recovering state" Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.699508336Z" level=info msg="Start event monitor" Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.699537039Z" level=info msg="Start snapshots syncer" Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.699546940Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.699554141Z" level=info msg="Start streaming server" Aug 13 00:52:37.736835 env[1532]: time="2025-08-13T00:52:37.723293078Z" level=info msg="containerd successfully booted in 0.081528s" Aug 13 00:52:37.698972 systemd[1]: Started containerd.service. Aug 13 00:52:37.738183 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:52:37.741453 systemd-logind[1516]: New seat seat0. Aug 13 00:52:37.752959 extend-filesystems[1503]: Old size kept for /dev/sda9 Aug 13 00:52:37.755467 extend-filesystems[1503]: Found sr0 Aug 13 00:52:37.762483 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:52:37.762805 systemd[1]: Finished extend-filesystems.service. Aug 13 00:52:37.779341 bash[1553]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:52:37.780292 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:52:37.869065 systemd[1]: nvidia.service: Deactivated successfully. Aug 13 00:52:38.252394 dbus-daemon[1500]: [system] SELinux support is enabled Aug 13 00:52:38.252651 systemd[1]: Started dbus.service. Aug 13 00:52:38.257613 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:52:38.257641 systemd[1]: Reached target system-config.target. Aug 13 00:52:38.260156 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:52:38.260177 systemd[1]: Reached target user-config.target. Aug 13 00:52:38.264068 systemd[1]: Started systemd-logind.service. Aug 13 00:52:38.389687 tar[1521]: linux-amd64/LICENSE Aug 13 00:52:38.389951 tar[1521]: linux-amd64/README.md Aug 13 00:52:38.401961 systemd[1]: Finished prepare-helm.service. Aug 13 00:52:38.554480 update_engine[1517]: I0813 00:52:38.537585 1517 main.cc:92] Flatcar Update Engine starting Aug 13 00:52:38.611522 systemd[1]: Started update-engine.service. Aug 13 00:52:38.616065 update_engine[1517]: I0813 00:52:38.611575 1517 update_check_scheduler.cc:74] Next update check in 8m6s Aug 13 00:52:38.616765 systemd[1]: Started locksmithd.service. Aug 13 00:52:39.049358 systemd[1]: Started kubelet.service. Aug 13 00:52:39.689189 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:52:39.711477 systemd[1]: Finished sshd-keygen.service. Aug 13 00:52:39.716772 systemd[1]: Starting issuegen.service... Aug 13 00:52:39.720537 systemd[1]: Started waagent.service. Aug 13 00:52:39.730721 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:52:39.731029 systemd[1]: Finished issuegen.service. Aug 13 00:52:39.734659 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:52:39.756964 systemd[1]: Finished systemd-user-sessions.service. 
Aug 13 00:52:39.761393 systemd[1]: Started getty@tty1.service. Aug 13 00:52:39.768213 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:52:39.771022 systemd[1]: Reached target getty.target. Aug 13 00:52:39.773132 systemd[1]: Reached target multi-user.target. Aug 13 00:52:39.776666 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:52:39.786078 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:52:39.786330 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:52:39.801372 systemd[1]: Startup finished in 956ms (firmware) + 32.159s (loader) + 17.695s (kernel) + 36.153s (userspace) = 1min 26.965s. Aug 13 00:52:39.827920 kubelet[1622]: E0813 00:52:39.827893 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:52:39.829410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:52:39.829596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:52:40.209067 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:52:40.585324 login[1648]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:52:40.587412 login[1649]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:52:40.754202 systemd[1]: Created slice user-500.slice. Aug 13 00:52:40.755635 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:52:40.757729 systemd-logind[1516]: New session 2 of user core. Aug 13 00:52:40.761539 systemd-logind[1516]: New session 1 of user core. Aug 13 00:52:40.797495 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:52:40.799205 systemd[1]: Starting user@500.service... Aug 13 00:52:40.833603 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:41.533103 systemd[1657]: Queued start job for default target default.target. Aug 13 00:52:41.533484 systemd[1657]: Reached target paths.target. Aug 13 00:52:41.533513 systemd[1657]: Reached target sockets.target. Aug 13 00:52:41.533537 systemd[1657]: Reached target timers.target. Aug 13 00:52:41.533558 systemd[1657]: Reached target basic.target. Aug 13 00:52:41.533639 systemd[1657]: Reached target default.target. Aug 13 00:52:41.533684 systemd[1657]: Startup finished in 692ms. Aug 13 00:52:41.533753 systemd[1]: Started user@500.service. Aug 13 00:52:41.535429 systemd[1]: Started session-1.scope. Aug 13 00:52:41.536422 systemd[1]: Started session-2.scope. Aug 13 00:52:49.921089 waagent[1641]: 2025-08-13T00:52:49.920961Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Aug 13 00:52:49.956380 waagent[1641]: 2025-08-13T00:52:49.956305Z INFO Daemon Daemon OS: flatcar 3510.3.8 Aug 13 00:52:49.958583 waagent[1641]: 2025-08-13T00:52:49.958525Z INFO Daemon Daemon Python: 3.9.16 Aug 13 00:52:49.960803 waagent[1641]: 2025-08-13T00:52:49.960733Z INFO Daemon Daemon Run daemon Aug 13 00:52:49.969477 waagent[1641]: 2025-08-13T00:52:49.962939Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Aug 13 00:52:49.963742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 13 00:52:49.963929 systemd[1]: Stopped kubelet.service. Aug 13 00:52:49.965516 systemd[1]: Starting kubelet.service... Aug 13 00:52:50.011148 waagent[1641]: 2025-08-13T00:52:50.010975Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Aug 13 00:52:50.015026 waagent[1641]: 2025-08-13T00:52:50.014916Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:52:50.015969 waagent[1641]: 2025-08-13T00:52:50.015917Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:52:50.016571 waagent[1641]: 2025-08-13T00:52:50.016521Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:52:50.017965 waagent[1641]: 2025-08-13T00:52:50.017911Z INFO Daemon Daemon Activate resource disk Aug 13 00:52:50.018689 waagent[1641]: 2025-08-13T00:52:50.018640Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:52:50.026720 waagent[1641]: 2025-08-13T00:52:50.026666Z INFO Daemon Daemon Found device: None Aug 13 00:52:50.027604 waagent[1641]: 2025-08-13T00:52:50.027553Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:52:50.028342 waagent[1641]: 2025-08-13T00:52:50.028295Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 00:52:50.029961 waagent[1641]: 2025-08-13T00:52:50.029908Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:52:50.030719 waagent[1641]: 2025-08-13T00:52:50.030671Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:52:50.039794 waagent[1641]: 2025-08-13T00:52:50.039695Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Aug 13 00:52:50.042509 waagent[1641]: 2025-08-13T00:52:50.042411Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:52:50.043382 waagent[1641]: 2025-08-13T00:52:50.043330Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:52:50.044098 waagent[1641]: 2025-08-13T00:52:50.044050Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:52:50.833056 systemd[1]: Started kubelet.service. Aug 13 00:52:50.883761 waagent[1641]: 2025-08-13T00:52:50.882028Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:52:50.889078 kubelet[1697]: E0813 00:52:50.887204 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:52:50.892076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:52:50.892273 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:52:50.967312 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Aug 13 00:52:51.004414 waagent[1641]: 2025-08-13T00:52:51.004260Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:52:51.007270 waagent[1641]: 2025-08-13T00:52:51.007194Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:52:51.009980 waagent[1641]: 2025-08-13T00:52:51.009923Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Aug 13 00:52:51.013070 waagent[1641]: 2025-08-13T00:52:51.013011Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:52:51.015623 waagent[1641]: 2025-08-13T00:52:51.015564Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:52:51.017936 waagent[1641]: 2025-08-13T00:52:51.017882Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:52:51.190964 waagent[1641]: 2025-08-13T00:52:51.190822Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:52:51.194925 waagent[1641]: 2025-08-13T00:52:51.194873Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:52:51.197715 waagent[1641]: 2025-08-13T00:52:51.197657Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:52:52.480088 waagent[1641]: 2025-08-13T00:52:52.479932Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:52:52.535459 waagent[1641]: 2025-08-13T00:52:52.535363Z INFO Daemon Daemon Forcing an update of the goal state.. Aug 13 00:52:52.540256 waagent[1641]: 2025-08-13T00:52:52.536842Z INFO Daemon Daemon Fetching goal state [incarnation 1] Aug 13 00:52:52.605573 waagent[1641]: 2025-08-13T00:52:52.605450Z INFO Daemon Daemon Found private key matching thumbprint 07832DA34D248773205AAE7C838E3801DBD22C60 Aug 13 00:52:52.610473 waagent[1641]: 2025-08-13T00:52:52.610397Z INFO Daemon Daemon Fetch goal state completed Aug 13 00:52:52.632595 waagent[1641]: 2025-08-13T00:52:52.632527Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: ef686ce0-a105-4b38-a21f-938f39bdb256 New eTag: 7457520573093855595] Aug 13 00:52:52.637275 waagent[1641]: 2025-08-13T00:52:52.637207Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:52:52.648418 waagent[1641]: 2025-08-13T00:52:52.648352Z INFO Daemon Daemon Starting provisioning Aug 13 00:52:52.650724 waagent[1641]: 2025-08-13T00:52:52.650660Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:52:52.652776 waagent[1641]: 2025-08-13T00:52:52.652718Z INFO Daemon Daemon Set hostname [ci-3510.3.8-a-1859c445b4] Aug 13 00:52:52.674724 waagent[1641]: 2025-08-13T00:52:52.674585Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-a-1859c445b4] Aug 13 00:52:52.681815 waagent[1641]: 2025-08-13T00:52:52.676350Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:52:52.681815 waagent[1641]: 2025-08-13T00:52:52.677598Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:52:52.691046 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Aug 13 00:52:52.691371 systemd[1]: Stopped systemd-networkd-wait-online.service. Aug 13 00:52:52.691454 systemd[1]: Stopping systemd-networkd-wait-online.service... Aug 13 00:52:52.691743 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:52:52.696901 systemd-networkd[1244]: eth0: DHCPv6 lease lost Aug 13 00:52:52.698172 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:52:52.698375 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:52:52.700687 systemd[1]: Starting systemd-networkd.service... 
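The daemon's "Test for route to 168.63.129.16" step above verifies that the Azure WireServer is reachable before protocol detection continues. A minimal sketch of an equivalent check (Python 3 stdlib; an illustrative approximation, not waagent's actual implementation):

    # Sketch: check whether a route to the WireServer (168.63.129.16)
    # exists, roughly mirroring the "Test for route to 168.63.129.16"
    # step above. Illustrative only, not waagent's code.
    import socket
    import struct

    WIRESERVER = "168.63.129.16"

    def have_route(dest: str, table: str = "/proc/net/route") -> bool:
        # /proc/net/route lists Destination and Mask as little-endian hex.
        dest_int = struct.unpack("<I", socket.inet_aton(dest))[0]
        with open(table) as fh:
            next(fh)  # skip the header row
            for row in fh:
                fields = row.split()
                dst, mask = int(fields[1], 16), int(fields[7], 16)
                if dest_int & mask == dst:
                    return True
        return False

    print("route to wireserver:", have_route(WIRESERVER))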
Aug 13 00:52:52.739852 systemd-networkd[1717]: enP29208s1: Link UP Aug 13 00:52:52.739871 systemd-networkd[1717]: enP29208s1: Gained carrier Aug 13 00:52:52.741247 systemd-networkd[1717]: eth0: Link UP Aug 13 00:52:52.741256 systemd-networkd[1717]: eth0: Gained carrier Aug 13 00:52:52.741673 systemd-networkd[1717]: lo: Link UP Aug 13 00:52:52.741681 systemd-networkd[1717]: lo: Gained carrier Aug 13 00:52:52.741997 systemd-networkd[1717]: eth0: Gained IPv6LL Aug 13 00:52:52.743096 systemd-networkd[1717]: Enumeration completed Aug 13 00:52:52.743225 systemd[1]: Started systemd-networkd.service. Aug 13 00:52:52.747661 waagent[1641]: 2025-08-13T00:52:52.744483Z INFO Daemon Daemon Create user account if not exists Aug 13 00:52:52.746200 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:52:52.751936 waagent[1641]: 2025-08-13T00:52:52.749776Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:52:52.751936 waagent[1641]: 2025-08-13T00:52:52.751037Z INFO Daemon Daemon Configure sudoer Aug 13 00:52:52.752684 systemd-networkd[1717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:52:52.778317 waagent[1641]: 2025-08-13T00:52:52.778233Z INFO Daemon Daemon Configure sshd Aug 13 00:52:52.782321 waagent[1641]: 2025-08-13T00:52:52.780191Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:52:52.787917 systemd-networkd[1717]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 00:52:52.790537 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:52:53.961229 waagent[1641]: 2025-08-13T00:52:53.961136Z INFO Daemon Daemon Provisioning complete Aug 13 00:52:53.975493 waagent[1641]: 2025-08-13T00:52:53.975416Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:52:53.978347 waagent[1641]: 2025-08-13T00:52:53.978279Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 13 00:52:53.983239 waagent[1641]: 2025-08-13T00:52:53.983175Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Aug 13 00:52:54.246157 waagent[1724]: 2025-08-13T00:52:54.245980Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Aug 13 00:52:54.246899 waagent[1724]: 2025-08-13T00:52:54.246818Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:54.247066 waagent[1724]: 2025-08-13T00:52:54.247013Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:54.257805 waagent[1724]: 2025-08-13T00:52:54.257730Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Aug 13 00:52:54.257979 waagent[1724]: 2025-08-13T00:52:54.257926Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Aug 13 00:52:54.308116 waagent[1724]: 2025-08-13T00:52:54.307989Z INFO ExtHandler ExtHandler Found private key matching thumbprint 07832DA34D248773205AAE7C838E3801DBD22C60 Aug 13 00:52:54.308402 waagent[1724]: 2025-08-13T00:52:54.308345Z INFO ExtHandler ExtHandler Fetch goal state completed Aug 13 00:52:54.321154 waagent[1724]: 2025-08-13T00:52:54.321096Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 42b6c9fe-d33a-4aee-b55e-cd9a26f5b29c New eTag: 7457520573093855595] Aug 13 00:52:54.321651 waagent[1724]: 2025-08-13T00:52:54.321596Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:52:54.450058 waagent[1724]: 2025-08-13T00:52:54.449904Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:52:54.476656 waagent[1724]: 2025-08-13T00:52:54.476569Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1724 Aug 13 00:52:54.480026 waagent[1724]: 2025-08-13T00:52:54.479962Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:52:54.481192 waagent[1724]: 2025-08-13T00:52:54.481135Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 00:52:54.617807 waagent[1724]: 2025-08-13T00:52:54.617723Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:52:54.618230 waagent[1724]: 2025-08-13T00:52:54.618167Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:52:54.626218 waagent[1724]: 2025-08-13T00:52:54.626162Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 00:52:54.626674 waagent[1724]: 2025-08-13T00:52:54.626616Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:52:54.627718 waagent[1724]: 2025-08-13T00:52:54.627655Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Aug 13 00:52:54.628986 waagent[1724]: 2025-08-13T00:52:54.628928Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:52:54.629379 waagent[1724]: 2025-08-13T00:52:54.629325Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:54.629528 waagent[1724]: 2025-08-13T00:52:54.629479Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:54.630070 waagent[1724]: 2025-08-13T00:52:54.630017Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 13 00:52:54.630523 waagent[1724]: 2025-08-13T00:52:54.630468Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Aug 13 00:52:54.630953 waagent[1724]: 2025-08-13T00:52:54.630893Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:52:54.630953 waagent[1724]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:52:54.630953 waagent[1724]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:52:54.630953 waagent[1724]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:52:54.630953 waagent[1724]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:54.630953 waagent[1724]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:54.630953 waagent[1724]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:54.631272 waagent[1724]: 2025-08-13T00:52:54.631221Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:54.634164 waagent[1724]: 2025-08-13T00:52:54.633997Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:52:54.634293 waagent[1724]: 2025-08-13T00:52:54.634239Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:52:54.635067 waagent[1724]: 2025-08-13T00:52:54.635005Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:52:54.635424 waagent[1724]: 2025-08-13T00:52:54.635357Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 13 00:52:54.637690 waagent[1724]: 2025-08-13T00:52:54.637634Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:54.637968 waagent[1724]: 2025-08-13T00:52:54.637850Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:52:54.638843 waagent[1724]: 2025-08-13T00:52:54.638784Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:52:54.639629 waagent[1724]: 2025-08-13T00:52:54.639575Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:52:54.639762 waagent[1724]: 2025-08-13T00:52:54.639716Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:52:54.648092 waagent[1724]: 2025-08-13T00:52:54.648043Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Aug 13 00:52:54.648875 waagent[1724]: 2025-08-13T00:52:54.648812Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:52:54.651551 waagent[1724]: 2025-08-13T00:52:54.651498Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Aug 13 00:52:54.672243 waagent[1724]: 2025-08-13T00:52:54.672161Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
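The MonitorHandler routing-table dump above is a copy of /proc/net/route, where destinations and gateways appear as little-endian hex. A short sketch decoding the rows shown (values copied from the dump):

    # Sketch: decode the little-endian hex addresses in the routing table
    # dumped by MonitorHandler above (values copied from that dump).
    import socket
    import struct

    def hex_to_ip(value: str) -> str:
        return socket.inet_ntoa(struct.pack("<I", int(value, 16)))

    rows = [
        ("00000000", "0104C80A"),  # default          via 10.200.4.1
        ("0004C80A", "00000000"),  # 10.200.4.0/24    on-link
        ("0104C80A", "00000000"),  # 10.200.4.1       host route
        ("10813FA8", "0104C80A"),  # 168.63.129.16    via gateway
        ("FEA9FEA9", "0104C80A"),  # 169.254.169.254  via gateway
    ]
    for dest, gw in rows:
        print(f"{hex_to_ip(dest):>15}  via  {hex_to_ip(gw)}")

Decoded, the table is the default route via 10.200.4.1 plus host routes for the WireServer (168.63.129.16) and the instance metadata endpoint (169.254.169.254).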
Aug 13 00:52:54.696004 waagent[1724]: 2025-08-13T00:52:54.695891Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1717' Aug 13 00:52:54.835866 waagent[1724]: 2025-08-13T00:52:54.835784Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:52:54.835866 waagent[1724]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:52:54.835866 waagent[1724]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:52:54.835866 waagent[1724]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:f1:b7:72 brd ff:ff:ff:ff:ff:ff Aug 13 00:52:54.835866 waagent[1724]: 3: enP29208s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:f1:b7:72 brd ff:ff:ff:ff:ff:ff\ altname enP29208p0s2 Aug 13 00:52:54.835866 waagent[1724]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:52:54.835866 waagent[1724]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:52:54.835866 waagent[1724]: 2: eth0 inet 10.200.4.17/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:52:54.835866 waagent[1724]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:52:54.835866 waagent[1724]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Aug 13 00:52:54.835866 waagent[1724]: 2: eth0 inet6 fe80::6245:bdff:fef1:b772/64 scope link \ valid_lft forever preferred_lft forever Aug 13 00:52:54.955672 waagent[1724]: 2025-08-13T00:52:54.955543Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Aug 13 00:52:54.986127 waagent[1641]: 2025-08-13T00:52:54.986005Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Aug 13 00:52:54.991781 waagent[1641]: 2025-08-13T00:52:54.991722Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Aug 13 00:52:56.237484 waagent[1753]: 2025-08-13T00:52:56.237382Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Aug 13 00:52:56.238249 waagent[1753]: 2025-08-13T00:52:56.238180Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Aug 13 00:52:56.238409 waagent[1753]: 2025-08-13T00:52:56.238358Z INFO ExtHandler ExtHandler Python: 3.9.16 Aug 13 00:52:56.238563 waagent[1753]: 2025-08-13T00:52:56.238516Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Aug 13 00:52:56.253645 waagent[1753]: 2025-08-13T00:52:56.253542Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:52:56.254070 waagent[1753]: 2025-08-13T00:52:56.254013Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:56.254249 waagent[1753]: 2025-08-13T00:52:56.254200Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:56.254490 waagent[1753]: 2025-08-13T00:52:56.254439Z INFO ExtHandler ExtHandler Initializing the goal state... 
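The interface listing above was captured with "ip -o" (oneline) commands, which fold multi-line output onto a single line using "\" separators; that is why entries read "…qlen 1000\ link/ether …". A small sketch splitting such a record back into its parts (sample abridged from the log):

    # Sketch: split a folded "ip -o" record like the ones quoted above.
    # In -o (oneline) mode, ip replaces embedded newlines with "\".
    sample = ("2: eth0 inet 10.200.4.17/24 metric 1024 brd 10.200.4.255 "
              "scope global eth0\\ valid_lft forever preferred_lft forever")

    index, rest = sample.split(":", 1)
    segments = [part.split() for part in rest.split("\\")]
    print(index.strip(), segments)
    # first segment: address details, second: lifetime information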
Aug 13 00:52:56.266533 waagent[1753]: 2025-08-13T00:52:56.266458Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:52:56.274774 waagent[1753]: 2025-08-13T00:52:56.274713Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 00:52:56.275677 waagent[1753]: 2025-08-13T00:52:56.275618Z INFO ExtHandler Aug 13 00:52:56.275840 waagent[1753]: 2025-08-13T00:52:56.275790Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: bb2b8f56-7b27-4218-b9bc-9caf99f0bff9 eTag: 7457520573093855595 source: Fabric] Aug 13 00:52:56.276582 waagent[1753]: 2025-08-13T00:52:56.276525Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Aug 13 00:52:56.277676 waagent[1753]: 2025-08-13T00:52:56.277616Z INFO ExtHandler Aug 13 00:52:56.277824 waagent[1753]: 2025-08-13T00:52:56.277775Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:52:56.283983 waagent[1753]: 2025-08-13T00:52:56.283931Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 00:52:56.284440 waagent[1753]: 2025-08-13T00:52:56.284391Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:52:56.304622 waagent[1753]: 2025-08-13T00:52:56.304558Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Aug 13 00:52:56.358078 waagent[1753]: 2025-08-13T00:52:56.357969Z INFO ExtHandler Downloaded certificate {'thumbprint': '07832DA34D248773205AAE7C838E3801DBD22C60', 'hasPrivateKey': True} Aug 13 00:52:56.359268 waagent[1753]: 2025-08-13T00:52:56.359204Z INFO ExtHandler Fetch goal state from WireServer completed Aug 13 00:52:56.360090 waagent[1753]: 2025-08-13T00:52:56.360031Z INFO ExtHandler ExtHandler Goal state initialization completed. 
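The goal-state handling above matches a private key to the certificate with thumbprint 07832DA34D248773205AAE7C838E3801DBD22C60. A 40-hex-digit thumbprint of this kind is conventionally the uppercase SHA-1 digest of the DER-encoded certificate; a minimal stdlib sketch (the SHA-1 convention is an assumption here, not stated in the log, and the input file name is hypothetical):

    # Sketch: compute a certificate thumbprint of the form matched above.
    # Assumption: thumbprint = uppercase SHA-1 of the DER-encoded cert.
    # "certificate.pem" is a hypothetical input file.
    import hashlib
    import ssl

    with open("certificate.pem") as fh:
        der = ssl.PEM_cert_to_DER_cert(fh.read())

    print(hashlib.sha1(der).hexdigest().upper())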
Aug 13 00:52:56.378223 waagent[1753]: 2025-08-13T00:52:56.378129Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Aug 13 00:52:56.385916 waagent[1753]: 2025-08-13T00:52:56.385813Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Aug 13 00:52:56.389327 waagent[1753]: 2025-08-13T00:52:56.389234Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Aug 13 00:52:56.389538 waagent[1753]: 2025-08-13T00:52:56.389486Z INFO ExtHandler ExtHandler Checking state of the firewall Aug 13 00:52:56.554308 waagent[1753]: 2025-08-13T00:52:56.554136Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Aug 13 00:52:56.554308 waagent[1753]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:52:56.554308 waagent[1753]: pkts bytes target prot opt in out source destination Aug 13 00:52:56.554308 waagent[1753]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:52:56.554308 waagent[1753]: pkts bytes target prot opt in out source destination Aug 13 00:52:56.554308 waagent[1753]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:52:56.554308 waagent[1753]: pkts bytes target prot opt in out source destination Aug 13 00:52:56.554308 waagent[1753]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 00:52:56.554308 waagent[1753]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:52:56.554308 waagent[1753]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:52:56.555393 waagent[1753]: 2025-08-13T00:52:56.555327Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Aug 13 00:52:56.558012 waagent[1753]: 2025-08-13T00:52:56.557914Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Aug 13 00:52:56.558277 waagent[1753]: 2025-08-13T00:52:56.558226Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:52:56.558622 waagent[1753]: 2025-08-13T00:52:56.558568Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:52:56.566696 waagent[1753]: 2025-08-13T00:52:56.566637Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Aug 13 00:52:56.567198 waagent[1753]: 2025-08-13T00:52:56.567143Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:52:56.574467 waagent[1753]: 2025-08-13T00:52:56.574407Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1753 Aug 13 00:52:56.577493 waagent[1753]: 2025-08-13T00:52:56.577431Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:52:56.578248 waagent[1753]: 2025-08-13T00:52:56.578186Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Aug 13 00:52:56.579074 waagent[1753]: 2025-08-13T00:52:56.579018Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Aug 13 00:52:56.581550 waagent[1753]: 2025-08-13T00:52:56.581490Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Aug 13 00:52:56.581890 waagent[1753]: 2025-08-13T00:52:56.581823Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Aug 13 00:52:56.583135 waagent[1753]: 2025-08-13T00:52:56.583077Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:52:56.583568 waagent[1753]: 2025-08-13T00:52:56.583513Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:56.583733 waagent[1753]: 2025-08-13T00:52:56.583686Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:56.584250 waagent[1753]: 2025-08-13T00:52:56.584199Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 13 00:52:56.584564 waagent[1753]: 2025-08-13T00:52:56.584512Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:52:56.584564 waagent[1753]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:52:56.584564 waagent[1753]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:52:56.584564 waagent[1753]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:52:56.584564 waagent[1753]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:56.584564 waagent[1753]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:56.584564 waagent[1753]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:56.586960 waagent[1753]: 2025-08-13T00:52:56.586845Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:52:56.587946 waagent[1753]: 2025-08-13T00:52:56.587892Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:56.588103 waagent[1753]: 2025-08-13T00:52:56.588054Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:52:56.588225 waagent[1753]: 2025-08-13T00:52:56.588168Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
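The "Created firewall rules for Azure Fabric" table above amounts to three OUTPUT rules scoped to 168.63.129.16: permit DNS (dpt:53), permit root-owned traffic, and drop other new connections. A sketch of roughly equivalent iptables invocations (the table choice and option order are assumptions, not copied from waagent; running this requires root):

    # Sketch: roughly equivalent iptables commands for the three OUTPUT
    # rules listed in the "Created firewall rules for Azure Fabric" table
    # above. Exact table and option order used by waagent are assumptions.
    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        # allow DNS to the WireServer
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        # allow root-owned traffic (the agent itself runs as root)
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # drop any other new or invalid connection attempts
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)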
Aug 13 00:52:56.588716 waagent[1753]: 2025-08-13T00:52:56.588665Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:56.589798 waagent[1753]: 2025-08-13T00:52:56.589750Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:52:56.591565 waagent[1753]: 2025-08-13T00:52:56.591462Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:52:56.591839 waagent[1753]: 2025-08-13T00:52:56.591782Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:52:56.593368 waagent[1753]: 2025-08-13T00:52:56.593315Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:52:56.593719 waagent[1753]: 2025-08-13T00:52:56.593660Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 13 00:52:56.596151 waagent[1753]: 2025-08-13T00:52:56.596075Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:52:56.615488 waagent[1753]: 2025-08-13T00:52:56.615396Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:52:56.615488 waagent[1753]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:52:56.615488 waagent[1753]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:52:56.615488 waagent[1753]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:f1:b7:72 brd ff:ff:ff:ff:ff:ff Aug 13 00:52:56.615488 waagent[1753]: 3: enP29208s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:f1:b7:72 brd ff:ff:ff:ff:ff:ff\ altname enP29208p0s2 Aug 13 00:52:56.615488 waagent[1753]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:52:56.615488 waagent[1753]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:52:56.615488 waagent[1753]: 2: eth0 inet 10.200.4.17/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:52:56.615488 waagent[1753]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:52:56.615488 waagent[1753]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Aug 13 00:52:56.615488 waagent[1753]: 2: eth0 inet6 fe80::6245:bdff:fef1:b772/64 scope link \ valid_lft forever preferred_lft forever Aug 13 00:52:56.617665 waagent[1753]: 2025-08-13T00:52:56.617604Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Aug 13 00:52:56.618652 waagent[1753]: 2025-08-13T00:52:56.618596Z INFO ExtHandler ExtHandler Downloading agent manifest Aug 13 00:52:56.642288 waagent[1753]: 2025-08-13T00:52:56.642225Z INFO ExtHandler ExtHandler Aug 13 00:52:56.643493 waagent[1753]: 2025-08-13T00:52:56.643437Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d6ad8edc-96ab-442c-b040-eef9a1e78ce7 correlation 339145e6-c420-438f-b041-59fad9752cd1 created: 2025-08-13T00:51:02.225410Z] Aug 13 00:52:56.645810 waagent[1753]: 2025-08-13T00:52:56.645764Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Aug 13 00:52:56.647623 waagent[1753]: 2025-08-13T00:52:56.647576Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 5 ms] Aug 13 00:52:56.670637 waagent[1753]: 2025-08-13T00:52:56.670573Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 13 00:52:56.673573 waagent[1753]: 2025-08-13T00:52:56.673512Z INFO ExtHandler ExtHandler Looking for existing remote access users. Aug 13 00:52:56.677214 waagent[1753]: 2025-08-13T00:52:56.677157Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D8A9CCE8-0383-4B22-BB1E-A42A9B405FE7;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Aug 13 00:53:01.063020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:53:01.063343 systemd[1]: Stopped kubelet.service. Aug 13 00:53:01.065514 systemd[1]: Starting kubelet.service... Aug 13 00:53:01.164835 systemd[1]: Started kubelet.service. Aug 13 00:53:01.866880 kubelet[1802]: E0813 00:53:01.866822 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:01.868803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:01.869026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:12.063051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:53:12.063366 systemd[1]: Stopped kubelet.service. Aug 13 00:53:12.065199 systemd[1]: Starting kubelet.service... Aug 13 00:53:12.162260 systemd[1]: Started kubelet.service. Aug 13 00:53:12.884546 kubelet[1816]: E0813 00:53:12.884496 1816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:12.886085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:12.886224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:13.133777 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Aug 13 00:53:19.292541 systemd[1]: Created slice system-sshd.slice. Aug 13 00:53:19.294558 systemd[1]: Started sshd@0-10.200.4.17:22-10.200.16.10:41966.service. Aug 13 00:53:20.186405 sshd[1823]: Accepted publickey for core from 10.200.16.10 port 41966 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:20.188096 sshd[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:20.191906 systemd-logind[1516]: New session 3 of user core. Aug 13 00:53:20.193031 systemd[1]: Started session-3.scope. Aug 13 00:53:20.703492 systemd[1]: Started sshd@1-10.200.4.17:22-10.200.16.10:44190.service. Aug 13 00:53:21.294321 sshd[1828]: Accepted publickey for core from 10.200.16.10 port 44190 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:21.296388 sshd[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:21.302097 systemd[1]: Started session-4.scope. 
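The kubelet entries above keep failing for the same reason, /var/lib/kubelet/config.yaml does not exist yet (it is typically written later, e.g. by kubeadm), while systemd's restart counter climbs. A small sketch that summarizes such a crash loop from journal text (regexes tied to the exact phrasings above; the input path is hypothetical):

    # Sketch: summarize the kubelet crash loop visible in the journal above.
    # The regexes are tied to the exact phrasings in these log lines.
    import re

    def summarize(journal_text: str) -> None:
        restarts = re.findall(
            r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)",
            journal_text)
        missing = re.findall(r"open (\S+): no such file or directory",
                             journal_text)
        print("restart attempts:", [int(n) for n in restarts])
        print("missing file(s): ", sorted(set(missing)))

    # Usage (path is hypothetical):
    # summarize(open("/var/log/boot-journal.txt").read())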
Aug 13 00:53:21.302810 systemd-logind[1516]: New session 4 of user core. Aug 13 00:53:21.713073 sshd[1828]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:21.716183 systemd[1]: sshd@1-10.200.4.17:22-10.200.16.10:44190.service: Deactivated successfully. Aug 13 00:53:21.717508 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:53:21.717628 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:53:21.719419 systemd-logind[1516]: Removed session 4. Aug 13 00:53:21.816039 systemd[1]: Started sshd@2-10.200.4.17:22-10.200.16.10:44206.service. Aug 13 00:53:22.406403 sshd[1835]: Accepted publickey for core from 10.200.16.10 port 44206 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:22.408103 sshd[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:22.413794 systemd[1]: Started session-5.scope. Aug 13 00:53:22.414065 systemd-logind[1516]: New session 5 of user core. Aug 13 00:53:22.820685 sshd[1835]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:22.824071 systemd[1]: sshd@2-10.200.4.17:22-10.200.16.10:44206.service: Deactivated successfully. Aug 13 00:53:22.825900 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:53:22.826772 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:53:22.827993 systemd-logind[1516]: Removed session 5. Aug 13 00:53:22.916297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 00:53:22.916534 systemd[1]: Stopped kubelet.service. Aug 13 00:53:22.918656 systemd[1]: Starting kubelet.service... Aug 13 00:53:22.920765 systemd[1]: Started sshd@3-10.200.4.17:22-10.200.16.10:44208.service. Aug 13 00:53:23.024082 systemd[1]: Started kubelet.service. Aug 13 00:53:23.523586 sshd[1843]: Accepted publickey for core from 10.200.16.10 port 44208 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:23.525275 sshd[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:23.529975 systemd[1]: Started session-6.scope. Aug 13 00:53:23.530227 systemd-logind[1516]: New session 6 of user core. Aug 13 00:53:23.681340 kubelet[1852]: E0813 00:53:23.681291 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:23.682975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:23.683187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:23.943674 sshd[1843]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:23.946924 systemd[1]: sshd@3-10.200.4.17:22-10.200.16.10:44208.service: Deactivated successfully. Aug 13 00:53:23.948338 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:53:23.948368 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:53:23.949634 systemd-logind[1516]: Removed session 6. Aug 13 00:53:23.953043 update_engine[1517]: I0813 00:53:23.952332 1517 update_attempter.cc:509] Updating boot flags... Aug 13 00:53:24.041449 systemd[1]: Started sshd@4-10.200.4.17:22-10.200.16.10:44222.service. 
Aug 13 00:53:24.644406 sshd[1892]: Accepted publickey for core from 10.200.16.10 port 44222 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:24.646129 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:24.650840 systemd[1]: Started session-7.scope. Aug 13 00:53:24.651167 systemd-logind[1516]: New session 7 of user core. Aug 13 00:53:25.355743 sudo[1907]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:53:25.356138 sudo[1907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:25.379096 dbus-daemon[1500]: \xd0-\xb2\x8d\xafU: received setenforce notice (enforcing=2050743840) Aug 13 00:53:25.381043 sudo[1907]: pam_unix(sudo:session): session closed for user root Aug 13 00:53:25.494770 sshd[1892]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:25.498805 systemd[1]: sshd@4-10.200.4.17:22-10.200.16.10:44222.service: Deactivated successfully. Aug 13 00:53:25.500272 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:53:25.500387 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:53:25.501992 systemd-logind[1516]: Removed session 7. Aug 13 00:53:25.593178 systemd[1]: Started sshd@5-10.200.4.17:22-10.200.16.10:44228.service. Aug 13 00:53:26.186813 sshd[1911]: Accepted publickey for core from 10.200.16.10 port 44228 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:26.188520 sshd[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:26.194367 systemd[1]: Started session-8.scope. Aug 13 00:53:26.194674 systemd-logind[1516]: New session 8 of user core. Aug 13 00:53:26.514285 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:53:26.514801 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:26.517605 sudo[1916]: pam_unix(sudo:session): session closed for user root Aug 13 00:53:26.522258 sudo[1915]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:53:26.522543 sudo[1915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:26.531659 systemd[1]: Stopping audit-rules.service... Aug 13 00:53:26.532000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 00:53:26.535532 kernel: kauditd_printk_skb: 25 callbacks suppressed Aug 13 00:53:26.535586 kernel: audit: type=1305 audit(1755046406.532:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 00:53:26.535852 auditctl[1919]: No rules Aug 13 00:53:26.536354 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:53:26.536590 systemd[1]: Stopped audit-rules.service. Aug 13 00:53:26.538281 systemd[1]: Starting audit-rules.service... 
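The kernel audit records that follow identify events as audit(EPOCH.MSEC:SERIAL). A worked conversion of the value above, audit(1755046406.532:152), which lines up with the journal's own Aug 13 00:53:26.532 timestamp (UTC):

    # Sketch: convert the audit record id "audit(1755046406.532:152)"
    # shown above into wall-clock time; it matches the journal's
    # "Aug 13 00:53:26.532" (UTC).
    from datetime import datetime, timezone

    epoch, serial = "1755046406.532:152".split(":")
    when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(when.isoformat(timespec="milliseconds"), "serial", serial)
    # -> 2025-08-13T00:53:26.532+00:00 serial 152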
Aug 13 00:53:26.532000 audit[1919]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffaff0dda0 a2=420 a3=0 items=0 ppid=1 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:26.532000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Aug 13 00:53:26.563227 kernel: audit: type=1300 audit(1755046406.532:152): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffaff0dda0 a2=420 a3=0 items=0 ppid=1 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:26.563308 kernel: audit: type=1327 audit(1755046406.532:152): proctitle=2F7362696E2F617564697463746C002D44 Aug 13 00:53:26.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.568919 augenrules[1937]: No rules Aug 13 00:53:26.572466 kernel: audit: type=1131 audit(1755046406.535:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.572927 systemd[1]: Finished audit-rules.service. Aug 13 00:53:26.574784 sudo[1915]: pam_unix(sudo:session): session closed for user root Aug 13 00:53:26.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.573000 audit[1915]: USER_END pid=1915 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.585875 kernel: audit: type=1130 audit(1755046406.572:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.585921 kernel: audit: type=1106 audit(1755046406.573:155): pid=1915 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.574000 audit[1915]: CRED_DISP pid=1915 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.606534 kernel: audit: type=1104 audit(1755046406.574:156): pid=1915 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:26.676226 sshd[1911]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:26.676000 audit[1911]: USER_END pid=1911 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:53:26.684733 systemd[1]: sshd@5-10.200.4.17:22-10.200.16.10:44228.service: Deactivated successfully. Aug 13 00:53:26.685595 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:53:26.687119 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:53:26.688054 systemd-logind[1516]: Removed session 8. Aug 13 00:53:26.676000 audit[1911]: CRED_DISP pid=1911 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:53:26.702435 kernel: audit: type=1106 audit(1755046406.676:157): pid=1911 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:53:26.702508 kernel: audit: type=1104 audit(1755046406.676:158): pid=1911 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:53:26.702540 kernel: audit: type=1131 audit(1755046406.684:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.17:22-10.200.16.10:44228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.17:22-10.200.16.10:44228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:26.776365 systemd[1]: Started sshd@6-10.200.4.17:22-10.200.16.10:44230.service. Aug 13 00:53:26.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.17:22-10.200.16.10:44230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:27.363000 audit[1944]: USER_ACCT pid=1944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:53:27.364237 sshd[1944]: Accepted publickey for core from 10.200.16.10 port 44230 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:27.364000 audit[1944]: CRED_ACQ pid=1944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:53:27.364000 audit[1944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbe07b5d0 a2=3 a3=0 items=0 ppid=1 pid=1944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:27.364000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:27.365955 sshd[1944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:27.371455 systemd[1]: Started session-9.scope. Aug 13 00:53:27.371705 systemd-logind[1516]: New session 9 of user core. Aug 13 00:53:27.376000 audit[1944]: USER_START pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:53:27.378000 audit[1947]: CRED_ACQ pid=1947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:53:27.687000 audit[1948]: USER_ACCT pid=1948 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.688000 audit[1948]: CRED_REFR pid=1948 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.688966 sudo[1948]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:53:27.689281 sudo[1948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:27.690000 audit[1948]: USER_START pid=1948 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.729302 systemd[1]: Starting docker.service... 
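Audit PROCTITLE fields are hex-encoded, with NUL bytes separating argv elements. Decoding the sshd value above:

    # Sketch: decode the hex PROCTITLE value from the sshd audit record
    # above. Audit stores the process title as hex with NUL separators.
    raw = bytes.fromhex("737368643A20636F7265205B707269765D")
    print(raw.replace(b"\x00", b" ").decode())   # -> "sshd: core [priv]"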
Aug 13 00:53:27.790728 env[1958]: time="2025-08-13T00:53:27.790675534Z" level=info msg="Starting up" Aug 13 00:53:27.791993 env[1958]: time="2025-08-13T00:53:27.791968040Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:53:27.792121 env[1958]: time="2025-08-13T00:53:27.792108140Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:53:27.792184 env[1958]: time="2025-08-13T00:53:27.792172541Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:53:27.792231 env[1958]: time="2025-08-13T00:53:27.792223241Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:53:27.794049 env[1958]: time="2025-08-13T00:53:27.794030649Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:53:27.794137 env[1958]: time="2025-08-13T00:53:27.794128150Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:53:27.794192 env[1958]: time="2025-08-13T00:53:27.794182850Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:53:27.794233 env[1958]: time="2025-08-13T00:53:27.794225950Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:53:27.801688 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport95195363-merged.mount: Deactivated successfully. Aug 13 00:53:27.905104 env[1958]: time="2025-08-13T00:53:27.905062257Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 00:53:27.905104 env[1958]: time="2025-08-13T00:53:27.905088357Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 00:53:27.905379 env[1958]: time="2025-08-13T00:53:27.905323659Z" level=info msg="Loading containers: start." 
Aug 13 00:53:27.983000 audit[1984]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:27.983000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe25a8e020 a2=0 a3=7ffe25a8e00c items=0 ppid=1958 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:27.983000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 13 00:53:27.986000 audit[1986]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:27.986000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff555f4070 a2=0 a3=7fff555f405c items=0 ppid=1958 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:27.986000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 13 00:53:27.988000 audit[1988]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:27.988000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd89b506c0 a2=0 a3=7ffd89b506ac items=0 ppid=1958 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:27.988000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:53:27.990000 audit[1990]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:27.990000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffebc5668d0 a2=0 a3=7ffebc5668bc items=0 ppid=1958 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:27.990000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:53:27.991000 audit[1992]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:27.991000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc67d06030 a2=0 a3=7ffc67d0601c items=0 ppid=1958 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:27.991000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 13 00:53:27.993000 audit[1994]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Aug 13 00:53:27.993000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffef76c11f0 a2=0 a3=7ffef76c11dc items=0 ppid=1958 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:27.993000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 13 00:53:28.011000 audit[1996]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.011000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe94a02070 a2=0 a3=7ffe94a0205c items=0 ppid=1958 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.011000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 00:53:28.013000 audit[1998]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.013000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe3ad78a00 a2=0 a3=7ffe3ad789ec items=0 ppid=1958 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.013000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 00:53:28.015000 audit[2000]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.015000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe1a73f460 a2=0 a3=7ffe1a73f44c items=0 ppid=1958 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.015000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:53:28.031000 audit[2004]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.031000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe8fbbbbb0 a2=0 a3=7ffe8fbbbb9c items=0 ppid=1958 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.031000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:53:28.037000 audit[2005]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.037000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb355c6f0 a2=0 a3=7ffcb355c6dc items=0 ppid=1958 
pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.037000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:53:28.151892 kernel: Initializing XFRM netlink socket Aug 13 00:53:28.195613 env[1958]: time="2025-08-13T00:53:28.195572931Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 00:53:28.315000 audit[2012]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.315000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffcf730ab40 a2=0 a3=7ffcf730ab2c items=0 ppid=1958 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.315000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 13 00:53:28.356000 audit[2016]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.356000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff048e9cb0 a2=0 a3=7fff048e9c9c items=0 ppid=1958 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.356000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 13 00:53:28.360000 audit[2019]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.360000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc9020eb70 a2=0 a3=7ffc9020eb5c items=0 ppid=1958 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.360000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Aug 13 00:53:28.362000 audit[2021]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.362000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe4e338b80 a2=0 a3=7ffe4e338b6c items=0 ppid=1958 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.362000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Aug 13 00:53:28.364000 audit[2023]: NETFILTER_CFG 
table=nat:20 family=2 entries=2 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.364000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffca86e6510 a2=0 a3=7ffca86e64fc items=0 ppid=1958 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.364000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Aug 13 00:53:28.366000 audit[2025]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.366000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdc43ab880 a2=0 a3=7ffdc43ab86c items=0 ppid=1958 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.366000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Aug 13 00:53:28.368000 audit[2027]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.368000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff42e9b280 a2=0 a3=7fff42e9b26c items=0 ppid=1958 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.368000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Aug 13 00:53:28.370000 audit[2029]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.370000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffcbffad5c0 a2=0 a3=7ffcbffad5ac items=0 ppid=1958 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.370000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Aug 13 00:53:28.372000 audit[2031]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.372000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffea29b4640 a2=0 a3=7ffea29b462c items=0 ppid=1958 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.372000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:53:28.374000 audit[2033]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.374000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd12fa0b50 a2=0 a3=7ffd12fa0b3c items=0 ppid=1958 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.374000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:53:28.376000 audit[2035]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.376000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd8fb0ea20 a2=0 a3=7ffd8fb0ea0c items=0 ppid=1958 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.376000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Aug 13 00:53:28.377852 systemd-networkd[1717]: docker0: Link UP Aug 13 00:53:28.396000 audit[2039]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.396000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffd9bf9640 a2=0 a3=7fffd9bf962c items=0 ppid=1958 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.396000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:53:28.401000 audit[2040]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:28.401000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff40e545b0 a2=0 a3=7fff40e5459c items=0 ppid=1958 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:28.401000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:53:28.403131 env[1958]: time="2025-08-13T00:53:28.403101121Z" level=info msg="Loading containers: done." 
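The audit trail above records each Docker rule change as a NETFILTER_CFG/SYSCALL pair plus a PROCTITLE field, whose value is the hex-encoded argv of the iptables invocation with NUL bytes between arguments. A minimal decoding sketch (the example value is copied from one of the records above):

```python
# Decode an audit PROCTITLE value: the process argv, hex-encoded, with NUL
# bytes separating the individual arguments.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

# Value copied verbatim from one of the NETFILTER_CFG records above.
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D4900464F5257415244002D6A00444F434B45522D55534552"
))
# -> /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER
```

Decoding the other PROCTITLE values the same way yields the chain set Docker installs at startup (DOCKER, DOCKER-USER, DOCKER-ISOLATION-STAGE-1/2, and the MASQUERADE rule for 172.17.0.0/16 out of docker0).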
Aug 13 00:53:28.469594 env[1958]: time="2025-08-13T00:53:28.469536406Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:53:28.469882 env[1958]: time="2025-08-13T00:53:28.469813007Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:53:28.470019 env[1958]: time="2025-08-13T00:53:28.469992908Z" level=info msg="Daemon has completed initialization" Aug 13 00:53:28.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:28.516259 systemd[1]: Started docker.service. Aug 13 00:53:28.520252 env[1958]: time="2025-08-13T00:53:28.520211324Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:53:33.098032 env[1532]: time="2025-08-13T00:53:33.097973435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:53:33.812891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 13 00:53:33.813136 systemd[1]: Stopped kubelet.service. Aug 13 00:53:33.826602 kernel: kauditd_printk_skb: 84 callbacks suppressed Aug 13 00:53:33.826736 kernel: audit: type=1130 audit(1755046413.812:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:33.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:33.815335 systemd[1]: Starting kubelet.service... Aug 13 00:53:33.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:33.839888 kernel: audit: type=1131 audit(1755046413.812:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:34.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:34.211432 systemd[1]: Started kubelet.service. Aug 13 00:53:34.225887 kernel: audit: type=1130 audit(1755046414.210:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:34.260393 kubelet[2079]: E0813 00:53:34.260326 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:34.262063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:34.262266 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
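Once dockerd reports "API listen on /run/docker.sock" above, the Engine API is reachable over that Unix socket. A minimal liveness check, assuming the caller can open the socket (root or a member of the docker group); GET /_ping on a healthy daemon answers OK:

```python
# Talk to the Engine API over the Unix socket the daemon announced above.
# /_ping is the documented health endpoint and returns "OK" when dockerd is up.
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")
s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
print(s.recv(4096).decode(errors="replace"))
s.close()
```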
Aug 13 00:53:34.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:53:34.273872 kernel: audit: type=1131 audit(1755046414.261:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:53:34.734768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292208072.mount: Deactivated successfully. Aug 13 00:53:36.337822 env[1532]: time="2025-08-13T00:53:36.337762690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:36.342841 env[1532]: time="2025-08-13T00:53:36.342798367Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:36.346244 env[1532]: time="2025-08-13T00:53:36.346206054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:36.349357 env[1532]: time="2025-08-13T00:53:36.349328126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:36.349981 env[1532]: time="2025-08-13T00:53:36.349949560Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:53:36.350627 env[1532]: time="2025-08-13T00:53:36.350601895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:53:37.999773 env[1532]: time="2025-08-13T00:53:37.999703949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:38.007257 env[1532]: time="2025-08-13T00:53:38.007175640Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:38.016168 env[1532]: time="2025-08-13T00:53:38.016127405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:38.019845 env[1532]: time="2025-08-13T00:53:38.019812697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:38.020546 env[1532]: time="2025-08-13T00:53:38.020515333Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 00:53:38.021246 env[1532]: time="2025-08-13T00:53:38.021217370Z" level=info msg="PullImage 
\"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:53:39.402612 env[1532]: time="2025-08-13T00:53:39.402562830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:39.409525 env[1532]: time="2025-08-13T00:53:39.409487480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:39.414355 env[1532]: time="2025-08-13T00:53:39.414321924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:39.418837 env[1532]: time="2025-08-13T00:53:39.418809750Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:39.419457 env[1532]: time="2025-08-13T00:53:39.419428082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:53:39.420133 env[1532]: time="2025-08-13T00:53:39.420108116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:53:40.743123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275173226.mount: Deactivated successfully. Aug 13 00:53:41.382107 env[1532]: time="2025-08-13T00:53:41.382041289Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:41.397739 env[1532]: time="2025-08-13T00:53:41.397684337Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:41.405229 env[1532]: time="2025-08-13T00:53:41.405181795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:41.411162 env[1532]: time="2025-08-13T00:53:41.411128879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:41.412139 env[1532]: time="2025-08-13T00:53:41.412106226Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:53:41.412622 env[1532]: time="2025-08-13T00:53:41.412595949Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:53:42.067201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222612148.mount: Deactivated successfully. 
Aug 13 00:53:43.385251 env[1532]: time="2025-08-13T00:53:43.385192425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:43.391017 env[1532]: time="2025-08-13T00:53:43.390979687Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:43.397482 env[1532]: time="2025-08-13T00:53:43.397447279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:43.400612 env[1532]: time="2025-08-13T00:53:43.400576421Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:43.401257 env[1532]: time="2025-08-13T00:53:43.401226350Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:53:43.401766 env[1532]: time="2025-08-13T00:53:43.401738673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:53:43.954421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2676822905.mount: Deactivated successfully. Aug 13 00:53:43.974247 env[1532]: time="2025-08-13T00:53:43.974199965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:43.982028 env[1532]: time="2025-08-13T00:53:43.981993817Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:43.988401 env[1532]: time="2025-08-13T00:53:43.988369806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:43.992458 env[1532]: time="2025-08-13T00:53:43.992430289Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:43.992875 env[1532]: time="2025-08-13T00:53:43.992834607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:53:43.993758 env[1532]: time="2025-08-13T00:53:43.993728048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:53:44.313022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Aug 13 00:53:44.327637 kernel: audit: type=1130 audit(1755046424.312:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:44.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 00:53:44.313262 systemd[1]: Stopped kubelet.service. Aug 13 00:53:44.315011 systemd[1]: Starting kubelet.service... Aug 13 00:53:44.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:44.342869 kernel: audit: type=1131 audit(1755046424.312:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:44.422971 systemd[1]: Started kubelet.service. Aug 13 00:53:44.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:44.439896 kernel: audit: type=1130 audit(1755046424.422:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:45.089313 kubelet[2095]: E0813 00:53:45.089262 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:45.090933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:45.091165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:45.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:53:45.103880 kernel: audit: type=1131 audit(1755046425.090:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:53:45.449617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919477152.mount: Deactivated successfully. 
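The kernel audit lines interleaved here carry their own clock in the form audit(EPOCH.millis:SERIAL). Converting the epoch portion shows it names the same instant as the journal's wall-clock prefix (this journal is timestamped in UTC); for example, using a value from the records above:

```python
# Convert the epoch inside audit(1755046425.090:201) to UTC; it matches the
# "Aug 13 00:53:45" prefix on the surrounding journal lines.
from datetime import datetime, timezone

epoch, serial = "1755046425.090:201".split(":")
print(datetime.fromtimestamp(float(epoch), tz=timezone.utc), "serial", serial)
# -> 2025-08-13 00:53:45.090000+00:00 serial 201
```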
Aug 13 00:53:47.960529 env[1532]: time="2025-08-13T00:53:47.960468247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:47.965280 env[1532]: time="2025-08-13T00:53:47.965229740Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:47.970755 env[1532]: time="2025-08-13T00:53:47.970711563Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:47.976666 env[1532]: time="2025-08-13T00:53:47.976630003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:47.977409 env[1532]: time="2025-08-13T00:53:47.977371933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:53:51.010835 systemd[1]: Stopped kubelet.service. Aug 13 00:53:51.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.016238 systemd[1]: Starting kubelet.service... Aug 13 00:53:51.025968 kernel: audit: type=1130 audit(1755046431.010:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.048967 kernel: audit: type=1131 audit(1755046431.013:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.067510 systemd[1]: Reloading. Aug 13 00:53:51.174150 /usr/lib/systemd/system-generators/torcx-generator[2147]: time="2025-08-13T00:53:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:53:51.174611 /usr/lib/systemd/system-generators/torcx-generator[2147]: time="2025-08-13T00:53:51Z" level=info msg="torcx already run" Aug 13 00:53:51.276453 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:53:51.276474 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:53:51.293037 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
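The reload above makes systemd re-emit its migration warnings for locksmithd.service (CPUShares=, MemoryLimit=) and docker.socket (a ListenStream= path under /var/run/). A read-only scan for units that still carry the flagged directives, assuming the usual unit directories:

```python
# List unit files still using the directives systemd warns about during the
# reload above: CPUShares=, MemoryLimit=, and ListenStream= paths under /var/run/.
import pathlib
import re

pattern = re.compile(r"^\s*(CPUShares=|MemoryLimit=|ListenStream=/var/run/)", re.M)
for root in ("/etc/systemd/system", "/run/systemd/system", "/usr/lib/systemd/system"):
    for unit in pathlib.Path(root).rglob("*"):
        if not unit.is_file():
            continue
        try:
            text = unit.read_text(errors="ignore")
        except OSError:
            continue
        for match in pattern.finditer(text):
            print(f"{unit}: {match.group(1)}")
```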
Aug 13 00:53:51.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.424240 systemd[1]: Stopping kubelet.service... Aug 13 00:53:51.426887 kernel: audit: type=1130 audit(1755046431.412:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.761257 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:53:51.761652 systemd[1]: Stopped kubelet.service. Aug 13 00:53:51.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.765117 systemd[1]: Starting kubelet.service... Aug 13 00:53:51.779885 kernel: audit: type=1131 audit(1755046431.760:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:52.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:52.285572 systemd[1]: Started kubelet.service. Aug 13 00:53:52.300890 kernel: audit: type=1130 audit(1755046432.285:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:52.558901 kubelet[2236]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:53:52.558901 kubelet[2236]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:53:52.558901 kubelet[2236]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
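Of the three deprecated flags noted above, --container-runtime-endpoint and --volume-plugin-dir have direct KubeletConfiguration fields (containerRuntimeEndpoint and volumePluginDir); --pod-infra-container-image does not, and per its own message the sandbox image will come from the CRI runtime instead. A sketch of moving the two portable settings into the --config file; the socket path is an assumed containerd default, while the plugin path is the one this kubelet reports further down:

```python
# Illustrative only: emit a KubeletConfiguration fragment equivalent to the
# deprecated --container-runtime-endpoint and --volume-plugin-dir flags.
# JSON is a subset of YAML, so the output is usable as a kubelet --config file.
import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # Assumed containerd socket; substitute the endpoint this node actually uses.
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # Matches the Flexvolume directory mentioned later in this log.
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(json.dumps(kubelet_config, indent=2))
```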
Aug 13 00:53:52.559372 kubelet[2236]: I0813 00:53:52.559166 2236 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:53:52.827581 kubelet[2236]: I0813 00:53:52.827545 2236 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:53:52.827581 kubelet[2236]: I0813 00:53:52.827575 2236 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:53:52.827914 kubelet[2236]: I0813 00:53:52.827895 2236 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:53:52.871391 kubelet[2236]: E0813 00:53:52.871357 2236 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:52.878324 kubelet[2236]: I0813 00:53:52.878293 2236 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:53:52.886171 kubelet[2236]: E0813 00:53:52.886138 2236 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:53:52.886171 kubelet[2236]: I0813 00:53:52.886171 2236 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:53:52.890707 kubelet[2236]: I0813 00:53:52.890688 2236 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:53:52.891746 kubelet[2236]: I0813 00:53:52.891724 2236 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:53:52.891932 kubelet[2236]: I0813 00:53:52.891907 2236 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:53:52.892113 kubelet[2236]: I0813 00:53:52.891930 2236 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-1859c445b4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:53:52.892258 kubelet[2236]: I0813 00:53:52.892125 2236 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:53:52.892258 kubelet[2236]: I0813 00:53:52.892138 2236 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:53:52.892258 kubelet[2236]: I0813 00:53:52.892249 2236 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:53:52.904045 kubelet[2236]: I0813 00:53:52.904022 2236 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:53:52.904133 kubelet[2236]: I0813 00:53:52.904050 2236 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:53:52.904133 kubelet[2236]: I0813 00:53:52.904089 2236 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:53:52.904133 kubelet[2236]: I0813 00:53:52.904119 2236 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:53:52.919018 kubelet[2236]: I0813 00:53:52.919003 2236 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:53:52.919600 kubelet[2236]: I0813 00:53:52.919575 2236 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:53:52.925981 kubelet[2236]: W0813 00:53:52.925959 2236 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
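The certificate-bootstrap failure above and the reflector errors that follow all reduce to one symptom: nothing is listening on 10.200.4.17:6443 yet, which is expected while the kube-apiserver static pod is still being created. The condition is easy to reproduce directly, using the address from this log:

```python
# Probe the API server endpoint the kubelet keeps dialing (10.200.4.17:6443 here).
# Until the kube-apiserver static pod is running, this fails the same way the
# kubelet reports: connect: connection refused.
import socket

try:
    socket.create_connection(("10.200.4.17", 6443), timeout=2).close()
    print("port 6443 is accepting connections")
except OSError as exc:
    print(f"dial failed: {exc}")
```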
Aug 13 00:53:52.927649 kubelet[2236]: W0813 00:53:52.927606 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 13 00:53:52.927803 kubelet[2236]: E0813 00:53:52.927782 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:52.928008 kubelet[2236]: W0813 00:53:52.927969 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-1859c445b4&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 13 00:53:52.928110 kubelet[2236]: I0813 00:53:52.928079 2236 server.go:1274] "Started kubelet" Aug 13 00:53:52.928193 kubelet[2236]: E0813 00:53:52.928170 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-1859c445b4&limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:52.928493 kubelet[2236]: I0813 00:53:52.928465 2236 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:53:52.930035 kubelet[2236]: I0813 00:53:52.929423 2236 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:53:52.932000 audit[2236]: AVC avc: denied { mac_admin } for pid=2236 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:52.946381 kubelet[2236]: I0813 00:53:52.934424 2236 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:53:52.946381 kubelet[2236]: I0813 00:53:52.934598 2236 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:53:52.946381 kubelet[2236]: I0813 00:53:52.934906 2236 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:53:52.946381 kubelet[2236]: I0813 00:53:52.934952 2236 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:53:52.946381 kubelet[2236]: I0813 00:53:52.935020 2236 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:53:52.946381 kubelet[2236]: E0813 00:53:52.940967 2236 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-1859c445b4.185b2d6e9efe344a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-1859c445b4,UID:ci-3510.3.8-a-1859c445b4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-1859c445b4,},FirstTimestamp:2025-08-13 00:53:52.928052298 +0000 UTC m=+0.628713550,LastTimestamp:2025-08-13 00:53:52.928052298 +0000 UTC m=+0.628713550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-1859c445b4,}" Aug 13 00:53:52.946381 kubelet[2236]: E0813 00:53:52.944661 2236 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:53:52.946751 kubelet[2236]: I0813 00:53:52.945621 2236 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:53:52.946873 kernel: audit: type=1400 audit(1755046432.932:207): avc: denied { mac_admin } for pid=2236 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:52.955259 kernel: audit: type=1401 audit(1755046432.932:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:52.932000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:52.955379 kubelet[2236]: I0813 00:53:52.947845 2236 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:53:52.955379 kubelet[2236]: E0813 00:53:52.948093 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-1859c445b4\" not found" Aug 13 00:53:52.955379 kubelet[2236]: I0813 00:53:52.948324 2236 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:53:52.955379 kubelet[2236]: I0813 00:53:52.948370 2236 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:53:52.955379 kubelet[2236]: W0813 00:53:52.949347 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 13 00:53:52.955379 kubelet[2236]: E0813 00:53:52.949389 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:52.955379 kubelet[2236]: E0813 00:53:52.949444 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-1859c445b4?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="200ms" Aug 13 00:53:52.955379 kubelet[2236]: I0813 00:53:52.951081 2236 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:53:52.955379 kubelet[2236]: I0813 00:53:52.951092 2236 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:53:52.955379 kubelet[2236]: I0813 00:53:52.951162 2236 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:53:52.932000 audit[2236]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009c19e0 a1=c0009c7560 a2=c0009c19b0 a3=25 items=0 ppid=1 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:52.975324 kernel: audit: type=1300 audit(1755046432.932:207): arch=c000003e syscall=188 success=no exit=-22 a0=c0009c19e0 a1=c0009c7560 a2=c0009c19b0 a3=25 items=0 ppid=1 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:52.975442 kernel: audit: type=1327 audit(1755046432.932:207): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:52.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:52.934000 audit[2236]: AVC avc: denied { mac_admin } for pid=2236 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:52.991897 kernel: audit: type=1400 audit(1755046432.934:208): avc: denied { mac_admin } for pid=2236 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:52.934000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:52.934000 audit[2236]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009d9500 a1=c0009c7578 a2=c0009c1a70 a3=25 items=0 ppid=1 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:52.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:52.966000 audit[2248]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:52.966000 audit[2248]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe030afcb0 a2=0 a3=7ffe030afc9c items=0 ppid=2236 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:52.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:53:52.967000 audit[2249]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2249 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Aug 13 00:53:52.967000 audit[2249]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcea02c1e0 a2=0 a3=7ffcea02c1cc items=0 ppid=2236 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:52.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:53:52.971000 audit[2251]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:52.971000 audit[2251]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff1115aa80 a2=0 a3=7fff1115aa6c items=0 ppid=2236 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:52.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:53:52.974000 audit[2253]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:52.974000 audit[2253]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcee00e890 a2=0 a3=7ffcee00e87c items=0 ppid=2236 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:52.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:53:53.004931 kubelet[2236]: I0813 00:53:53.004895 2236 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:53:53.005037 kubelet[2236]: I0813 00:53:53.004951 2236 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:53:53.005037 kubelet[2236]: I0813 00:53:53.004972 2236 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:53:53.009080 kubelet[2236]: I0813 00:53:53.009065 2236 policy_none.go:49] "None policy: Start" Aug 13 00:53:53.009724 kubelet[2236]: I0813 00:53:53.009709 2236 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:53:53.009806 kubelet[2236]: I0813 00:53:53.009799 2236 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:53:53.016725 kubelet[2236]: I0813 00:53:53.016709 2236 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:53:53.015000 audit[2236]: AVC avc: denied { mac_admin } for pid=2236 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:53.015000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:53.015000 audit[2236]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c70990 a1=c000c62948 a2=c000c70960 a3=25 items=0 ppid=1 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.015000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:53.017061 kubelet[2236]: I0813 00:53:53.017048 2236 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:53:53.017187 kubelet[2236]: I0813 00:53:53.017179 2236 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:53:53.017255 kubelet[2236]: I0813 00:53:53.017233 2236 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:53:53.018375 kubelet[2236]: I0813 00:53:53.018361 2236 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:53:53.022795 kubelet[2236]: E0813 00:53:53.022698 2236 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-a-1859c445b4\" not found" Aug 13 00:53:53.026000 audit[2258]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:53.026000 audit[2258]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd6a3300a0 a2=0 a3=7ffd6a33008c items=0 ppid=2236 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Aug 13 00:53:53.027586 kubelet[2236]: I0813 00:53:53.027550 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:53:53.027000 audit[2260]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:53:53.027000 audit[2260]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe62e1e410 a2=0 a3=7ffe62e1e3fc items=0 ppid=2236 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:53:53.028681 kubelet[2236]: I0813 00:53:53.028634 2236 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:53:53.028681 kubelet[2236]: I0813 00:53:53.028653 2236 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:53:53.028681 kubelet[2236]: I0813 00:53:53.028673 2236 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:53:53.028809 kubelet[2236]: E0813 00:53:53.028720 2236 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 00:53:53.029000 audit[2262]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:53:53.029000 audit[2262]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd89157d90 a2=0 a3=7ffd89157d7c items=0 ppid=2236 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.029000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:53:53.031225 kubelet[2236]: W0813 00:53:53.031192 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 13 00:53:53.031366 kubelet[2236]: E0813 00:53:53.031343 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:53.030000 audit[2261]: NETFILTER_CFG table=mangle:36 family=2 entries=1 op=nft_register_chain pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:53.030000 audit[2261]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef4db4dc0 a2=0 a3=7ffef4db4dac items=0 ppid=2236 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:53:53.031000 audit[2265]: NETFILTER_CFG table=nat:37 family=10 entries=2 op=nft_register_chain pid=2265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:53:53.031000 audit[2265]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffde53129b0 a2=0 a3=7ffde531299c items=0 ppid=2236 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:53:53.032000 audit[2266]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:53.032000 audit[2266]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd39ccc80 a2=0 a3=7ffcd39ccc6c items=0 ppid=2236 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:53:53.033000 audit[2267]: NETFILTER_CFG table=filter:39 family=10 entries=2 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:53:53.033000 audit[2267]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd8adf5f0 a2=0 a3=7fffd8adf5dc items=0 ppid=2236 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:53:53.034000 audit[2268]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:53.034000 audit[2268]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9ea4f8a0 a2=0 a3=7ffd9ea4f88c items=0 ppid=2236 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:53:53.119393 kubelet[2236]: I0813 00:53:53.119292 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.119989 kubelet[2236]: E0813 00:53:53.119954 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.149805 kubelet[2236]: E0813 00:53:53.149770 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-1859c445b4?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="400ms" Aug 13 00:53:53.250279 kubelet[2236]: I0813 00:53:53.250218 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01a57b2e078119d664a18e5709382536-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-1859c445b4\" (UID: \"01a57b2e078119d664a18e5709382536\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.250279 kubelet[2236]: I0813 00:53:53.250274 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01a57b2e078119d664a18e5709382536-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-1859c445b4\" (UID: \"01a57b2e078119d664a18e5709382536\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.250548 kubelet[2236]: I0813 00:53:53.250311 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.250548 kubelet[2236]: I0813 00:53:53.250343 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c850e24e713a82e1867ac86877f28e85-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-1859c445b4\" (UID: \"c850e24e713a82e1867ac86877f28e85\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.250548 kubelet[2236]: I0813 00:53:53.250369 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01a57b2e078119d664a18e5709382536-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-1859c445b4\" (UID: \"01a57b2e078119d664a18e5709382536\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.250548 kubelet[2236]: I0813 00:53:53.250395 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.250548 kubelet[2236]: I0813 00:53:53.250421 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.250755 kubelet[2236]: I0813 00:53:53.250450 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.250755 kubelet[2236]: I0813 00:53:53.250479 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.322323 kubelet[2236]: I0813 00:53:53.322280 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.322746 kubelet[2236]: E0813 00:53:53.322711 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.439351 env[1532]: time="2025-08-13T00:53:53.438639448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-1859c445b4,Uid:d02a45e7bac14c9bcea2255908245498,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:53.440221 env[1532]: 
time="2025-08-13T00:53:53.440177001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-1859c445b4,Uid:01a57b2e078119d664a18e5709382536,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:53.440881 env[1532]: time="2025-08-13T00:53:53.440827524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-1859c445b4,Uid:c850e24e713a82e1867ac86877f28e85,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:53.550918 kubelet[2236]: E0813 00:53:53.550844 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-1859c445b4?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="800ms" Aug 13 00:53:53.725046 kubelet[2236]: I0813 00:53:53.724949 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:53.725607 kubelet[2236]: E0813 00:53:53.725551 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:54.034589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846767975.mount: Deactivated successfully. Aug 13 00:53:54.066647 env[1532]: time="2025-08-13T00:53:54.066593143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.069590 env[1532]: time="2025-08-13T00:53:54.069554543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.091353 env[1532]: time="2025-08-13T00:53:54.091298776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.100531 env[1532]: time="2025-08-13T00:53:54.100492587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.108040 env[1532]: time="2025-08-13T00:53:54.107999740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.114566 env[1532]: time="2025-08-13T00:53:54.114530860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.117743 env[1532]: time="2025-08-13T00:53:54.117710368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.122692 env[1532]: time="2025-08-13T00:53:54.122660735Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.125525 env[1532]: time="2025-08-13T00:53:54.125494630Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.129430 env[1532]: time="2025-08-13T00:53:54.129399762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.132828 env[1532]: time="2025-08-13T00:53:54.132798277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.135754 env[1532]: time="2025-08-13T00:53:54.135722476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:54.179358 kubelet[2236]: W0813 00:53:54.179321 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 13 00:53:54.179505 kubelet[2236]: E0813 00:53:54.179367 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:54.213112 env[1532]: time="2025-08-13T00:53:54.213042185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:54.213112 env[1532]: time="2025-08-13T00:53:54.213080186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:54.213112 env[1532]: time="2025-08-13T00:53:54.213094087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:54.213683 env[1532]: time="2025-08-13T00:53:54.213633005Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/562278e822f9262e7d7f387a3d2c1c7c711ce5b97c5d402368c811bba9db7493 pid=2277 runtime=io.containerd.runc.v2 Aug 13 00:53:54.240319 env[1532]: time="2025-08-13T00:53:54.240235803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:54.240562 env[1532]: time="2025-08-13T00:53:54.240534213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:54.240719 env[1532]: time="2025-08-13T00:53:54.240695118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:54.241157 env[1532]: time="2025-08-13T00:53:54.241096632Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ada0423340e4c0712158cbfdc7e63d08a1c7b625aebdcfceec6e11001a934e76 pid=2305 runtime=io.containerd.runc.v2 Aug 13 00:53:54.249914 env[1532]: time="2025-08-13T00:53:54.249761724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:54.249914 env[1532]: time="2025-08-13T00:53:54.249806426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:54.249914 env[1532]: time="2025-08-13T00:53:54.249835226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:54.250319 env[1532]: time="2025-08-13T00:53:54.250264941Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/561e5d66febd5e4b814ebc769db987945728086c0fb9e35010f68504b8b14325 pid=2324 runtime=io.containerd.runc.v2 Aug 13 00:53:54.347096 env[1532]: time="2025-08-13T00:53:54.347044607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-1859c445b4,Uid:01a57b2e078119d664a18e5709382536,Namespace:kube-system,Attempt:0,} returns sandbox id \"562278e822f9262e7d7f387a3d2c1c7c711ce5b97c5d402368c811bba9db7493\"" Aug 13 00:53:54.350316 env[1532]: time="2025-08-13T00:53:54.350287516Z" level=info msg="CreateContainer within sandbox \"562278e822f9262e7d7f387a3d2c1c7c711ce5b97c5d402368c811bba9db7493\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:53:54.351975 kubelet[2236]: E0813 00:53:54.351928 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-1859c445b4?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="1.6s" Aug 13 00:53:54.361635 env[1532]: time="2025-08-13T00:53:54.361602398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-1859c445b4,Uid:c850e24e713a82e1867ac86877f28e85,Namespace:kube-system,Attempt:0,} returns sandbox id \"561e5d66febd5e4b814ebc769db987945728086c0fb9e35010f68504b8b14325\"" Aug 13 00:53:54.364149 env[1532]: time="2025-08-13T00:53:54.364121783Z" level=info msg="CreateContainer within sandbox \"561e5d66febd5e4b814ebc769db987945728086c0fb9e35010f68504b8b14325\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:53:54.370867 env[1532]: time="2025-08-13T00:53:54.370748007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-1859c445b4,Uid:d02a45e7bac14c9bcea2255908245498,Namespace:kube-system,Attempt:0,} returns sandbox id \"ada0423340e4c0712158cbfdc7e63d08a1c7b625aebdcfceec6e11001a934e76\"" Aug 13 00:53:54.372845 env[1532]: time="2025-08-13T00:53:54.372821677Z" level=info msg="CreateContainer within sandbox \"ada0423340e4c0712158cbfdc7e63d08a1c7b625aebdcfceec6e11001a934e76\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:53:54.375399 kubelet[2236]: W0813 00:53:54.375271 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 13 00:53:54.375399 kubelet[2236]: E0813 00:53:54.375357 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:54.416209 kubelet[2236]: W0813 00:53:54.416092 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 13 00:53:54.416209 kubelet[2236]: E0813 00:53:54.416173 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:54.422018 env[1532]: time="2025-08-13T00:53:54.421962535Z" level=info msg="CreateContainer within sandbox \"562278e822f9262e7d7f387a3d2c1c7c711ce5b97c5d402368c811bba9db7493\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ee8864a6d1139b9ce8eb89fab4588c1b678e4a39111a8073b5a05fe32354c7a\"" Aug 13 00:53:54.422962 env[1532]: time="2025-08-13T00:53:54.422933968Z" level=info msg="StartContainer for \"1ee8864a6d1139b9ce8eb89fab4588c1b678e4a39111a8073b5a05fe32354c7a\"" Aug 13 00:53:54.430353 env[1532]: time="2025-08-13T00:53:54.430313217Z" level=info msg="CreateContainer within sandbox \"561e5d66febd5e4b814ebc769db987945728086c0fb9e35010f68504b8b14325\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3102cc1c156cc8a5b08e791b372df592e567196016987b02bad7e2838c36a1ad\"" Aug 13 00:53:54.431141 env[1532]: time="2025-08-13T00:53:54.431109044Z" level=info msg="StartContainer for \"3102cc1c156cc8a5b08e791b372df592e567196016987b02bad7e2838c36a1ad\"" Aug 13 00:53:54.437634 env[1532]: time="2025-08-13T00:53:54.437608863Z" level=info msg="CreateContainer within sandbox \"ada0423340e4c0712158cbfdc7e63d08a1c7b625aebdcfceec6e11001a934e76\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d6c292e3bb6870cebfb4fa525433ac34bbbd8968014eb731c9df36617a9ecc48\"" Aug 13 00:53:54.438156 env[1532]: time="2025-08-13T00:53:54.438131381Z" level=info msg="StartContainer for \"d6c292e3bb6870cebfb4fa525433ac34bbbd8968014eb731c9df36617a9ecc48\"" Aug 13 00:53:54.516939 kubelet[2236]: W0813 00:53:54.516522 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-1859c445b4&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 13 00:53:54.516939 kubelet[2236]: E0813 00:53:54.516628 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-1859c445b4&limit=500&resourceVersion=0\": dial tcp 10.200.4.17:6443: connect: connection refused" logger="UnhandledError" Aug 13 
00:53:54.530167 kubelet[2236]: I0813 00:53:54.530134 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:54.530610 kubelet[2236]: E0813 00:53:54.530577 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:54.543114 env[1532]: time="2025-08-13T00:53:54.543065422Z" level=info msg="StartContainer for \"1ee8864a6d1139b9ce8eb89fab4588c1b678e4a39111a8073b5a05fe32354c7a\" returns successfully" Aug 13 00:53:54.598189 env[1532]: time="2025-08-13T00:53:54.598072678Z" level=info msg="StartContainer for \"d6c292e3bb6870cebfb4fa525433ac34bbbd8968014eb731c9df36617a9ecc48\" returns successfully" Aug 13 00:53:54.603274 env[1532]: time="2025-08-13T00:53:54.603228752Z" level=info msg="StartContainer for \"3102cc1c156cc8a5b08e791b372df592e567196016987b02bad7e2838c36a1ad\" returns successfully" Aug 13 00:53:56.132868 kubelet[2236]: I0813 00:53:56.132829 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:56.387662 kubelet[2236]: E0813 00:53:56.387550 2236 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-a-1859c445b4\" not found" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:56.435648 kubelet[2236]: I0813 00:53:56.435595 2236 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:56.435648 kubelet[2236]: E0813 00:53:56.435652 2236 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-a-1859c445b4\": node \"ci-3510.3.8-a-1859c445b4\" not found" Aug 13 00:53:56.571774 kubelet[2236]: E0813 00:53:56.571731 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-1859c445b4\" not found" Aug 13 00:53:56.672707 kubelet[2236]: E0813 00:53:56.672598 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-1859c445b4\" not found" Aug 13 00:53:56.773167 kubelet[2236]: E0813 00:53:56.773124 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-1859c445b4\" not found" Aug 13 00:53:56.929556 kubelet[2236]: I0813 00:53:56.929244 2236 apiserver.go:52] "Watching apiserver" Aug 13 00:53:56.949330 kubelet[2236]: I0813 00:53:56.949288 2236 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:53:58.484110 systemd[1]: Reloading. Aug 13 00:53:58.573814 /usr/lib/systemd/system-generators/torcx-generator[2527]: time="2025-08-13T00:53:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:53:58.573851 /usr/lib/systemd/system-generators/torcx-generator[2527]: time="2025-08-13T00:53:58Z" level=info msg="torcx already run" Aug 13 00:53:58.698497 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:53:58.698710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
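Up to this point every kubelet request to the API server at https://10.200.4.17:6443 (node registration via POST /api/v1/nodes, the node lease, the informer list/watch calls) fails with "connection refused", and the lease controller's retry interval grows from 400ms to 800ms to 1.6s while containerd is still bringing up the control-plane static pods. A minimal Go sketch of that doubling-backoff retry pattern, using the address and starting interval from the log but a stand-in registerNode probe rather than the kubelet's real client code; the 7-second cap is an assumption, not something the log shows:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// registerNode stands in for the kubelet's POST to /api/v1/nodes: it only
// probes the API server address, so it fails with "connection refused"
// until the kube-apiserver static pod is actually listening.
func registerNode(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const apiServer = "10.200.4.17:6443" // address taken from the log
	interval := 400 * time.Millisecond   // first retry interval reported above
	const maxInterval = 7 * time.Second  // assumed cap, not shown in the log

	for {
		err := registerNode(apiServer)
		if err == nil {
			fmt.Println("node registered")
			return
		}
		fmt.Printf("registration failed: %v; retrying in %s\n", err, interval)
		time.Sleep(interval)
		if interval*2 <= maxInterval {
			interval *= 2 // 400ms -> 800ms -> 1.6s, the progression visible above
		}
	}
}
```

Once the kube-apiserver container started above begins listening, the loop exits, which is what the later "Successfully registered node" record at 00:53:56 reflects.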
Aug 13 00:53:58.721058 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:53:58.841606 systemd[1]: Stopping kubelet.service... Aug 13 00:53:58.882722 kernel: kauditd_printk_skb: 43 callbacks suppressed Aug 13 00:53:58.882891 kernel: audit: type=1131 audit(1755046438.860:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:58.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:58.861305 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:53:58.861684 systemd[1]: Stopped kubelet.service. Aug 13 00:53:58.869048 systemd[1]: Starting kubelet.service... Aug 13 00:53:59.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:59.086477 systemd[1]: Started kubelet.service. Aug 13 00:53:59.104945 kernel: audit: type=1130 audit(1755046439.087:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:59.134489 kubelet[2605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:53:59.134953 kubelet[2605]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:53:59.135025 kubelet[2605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:53:59.135293 kubelet[2605]: I0813 00:53:59.135240 2605 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:53:59.141656 kubelet[2605]: I0813 00:53:59.141626 2605 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:53:59.141656 kubelet[2605]: I0813 00:53:59.141646 2605 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:53:59.141979 kubelet[2605]: I0813 00:53:59.141959 2605 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:53:59.143249 kubelet[2605]: I0813 00:53:59.143225 2605 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
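The restarted kubelet (pid 2605) logs that client rotation is on and that it bootstraps from /var/lib/kubelet/pki/kubelet-client-current.pem. A small, hypothetical inspection helper, not part of anything shown in this log, that reads that file and reports when the rotated client certificate expires; it assumes the usual layout in which the certificate and key are concatenated in the one PEM bundle:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path reported by the kubelet above; the file usually carries the
	// client certificate and its private key concatenated together.
	const path = "/var/lib/kubelet/pki/kubelet-client-current.pem"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Walk the PEM blocks and inspect the first CERTIFICATE we find.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject: %s\n", cert.Subject)
		fmt.Printf("expires: %s (in %s)\n", cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
		return
	}
	log.Fatalf("no CERTIFICATE block found in %s", path)
}
```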
Aug 13 00:53:59.145535 kubelet[2605]: I0813 00:53:59.145507 2605 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:53:59.149220 kubelet[2605]: E0813 00:53:59.149194 2605 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:53:59.149307 kubelet[2605]: I0813 00:53:59.149299 2605 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:53:59.153271 kubelet[2605]: I0813 00:53:59.153253 2605 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:53:59.153967 kubelet[2605]: I0813 00:53:59.153951 2605 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:53:59.154232 kubelet[2605]: I0813 00:53:59.154193 2605 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:53:59.154702 kubelet[2605]: I0813 00:53:59.154376 2605 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-1859c445b4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:53:59.154895 kubelet[2605]: I0813 00:53:59.154880 2605 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:53:59.154983 kubelet[2605]: I0813 00:53:59.154975 2605 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:53:59.155073 kubelet[2605]: I0813 00:53:59.155066 2605 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:53:59.155240 kubelet[2605]: I0813 00:53:59.155231 2605 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:53:59.155314 kubelet[2605]: I0813 00:53:59.155306 2605 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:53:59.155401 kubelet[2605]: I0813 
00:53:59.155393 2605 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:53:59.155471 kubelet[2605]: I0813 00:53:59.155464 2605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:53:59.157985 kubelet[2605]: I0813 00:53:59.157968 2605 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:53:59.158524 kubelet[2605]: I0813 00:53:59.158510 2605 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:53:59.159082 kubelet[2605]: I0813 00:53:59.159068 2605 server.go:1274] "Started kubelet" Aug 13 00:53:59.166000 audit[2605]: AVC avc: denied { mac_admin } for pid=2605 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:59.180240 kubelet[2605]: E0813 00:53:59.172819 2605 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:53:59.180240 kubelet[2605]: I0813 00:53:59.173098 2605 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:53:59.180240 kubelet[2605]: I0813 00:53:59.173921 2605 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:53:59.180240 kubelet[2605]: I0813 00:53:59.174683 2605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:53:59.180240 kubelet[2605]: I0813 00:53:59.174872 2605 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:53:59.180576 kubelet[2605]: I0813 00:53:59.180553 2605 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:53:59.180686 kubelet[2605]: I0813 00:53:59.180671 2605 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:53:59.180785 kubelet[2605]: I0813 00:53:59.180776 2605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:53:59.183455 kernel: audit: type=1400 audit(1755046439.166:224): avc: denied { mac_admin } for pid=2605 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:59.183576 kubelet[2605]: I0813 00:53:59.182031 2605 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:53:59.199566 kernel: audit: type=1401 audit(1755046439.166:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:59.166000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:59.199764 kubelet[2605]: I0813 00:53:59.184593 2605 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:53:59.199764 kubelet[2605]: I0813 00:53:59.184692 2605 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:53:59.199764 kubelet[2605]: I0813 00:53:59.184798 2605 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:53:59.199764 kubelet[2605]: I0813 00:53:59.194373 2605 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:53:59.199764 kubelet[2605]: I0813 00:53:59.196387 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:53:59.199764 kubelet[2605]: I0813 00:53:59.196502 2605 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:53:59.199764 kubelet[2605]: I0813 00:53:59.196527 2605 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:53:59.199764 kubelet[2605]: E0813 00:53:59.196589 2605 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:53:59.166000 audit[2605]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b01e90 a1=c0007ec9a8 a2=c000b01e60 a3=25 items=0 ppid=1 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.209202 kubelet[2605]: I0813 00:53:59.201646 2605 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:53:59.209202 kubelet[2605]: I0813 00:53:59.201736 2605 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:53:59.214193 kubelet[2605]: I0813 00:53:59.214157 2605 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:53:59.221121 kernel: audit: type=1300 audit(1755046439.166:224): arch=c000003e syscall=188 success=no exit=-22 a0=c000b01e90 a1=c0007ec9a8 a2=c000b01e60 a3=25 items=0 ppid=1 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.240081 kernel: audit: type=1327 audit(1755046439.166:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:59.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:59.177000 audit[2605]: AVC avc: denied { mac_admin } for pid=2605 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:59.262730 kernel: audit: type=1400 audit(1755046439.177:225): avc: denied { mac_admin } for pid=2605 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:59.262877 kernel: audit: type=1401 audit(1755046439.177:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:59.177000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:59.177000 audit[2605]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000682680 a1=c0007ec9c0 a2=c000b01f20 a3=25 items=0 ppid=1 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.282987 kernel: audit: type=1300 audit(1755046439.177:225): arch=c000003e syscall=188 success=no exit=-22 a0=c000682680 a1=c0007ec9c0 a2=c000b01f20 a3=25 items=0 ppid=1 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.292837 2605 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.292866 2605 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.292885 2605 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.293066 2605 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.293076 2605 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.293109 2605 policy_none.go:49] "None policy: Start" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.293657 2605 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.293675 2605 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.293834 2605 state_mem.go:75] "Updated machine memory state" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.295235 2605 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.295298 2605 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.295473 2605 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:53:59.301430 kubelet[2605]: I0813 00:53:59.295535 2605 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:53:59.302060 kernel: audit: type=1327 audit(1755046439.177:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:59.305889 kubelet[2605]: I0813 00:53:59.303046 2605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:53:59.293000 audit[2605]: AVC avc: denied { mac_admin } for pid=2605 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:59.293000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:53:59.293000 audit[2605]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00115a960 a1=c001160240 a2=c00115a930 a3=25 items=0 ppid=1 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.293000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:53:59.317392 kubelet[2605]: W0813 00:53:59.317225 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:53:59.317510 kubelet[2605]: W0813 00:53:59.317494 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:53:59.319384 kubelet[2605]: W0813 00:53:59.319351 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:53:59.386291 kubelet[2605]: I0813 00:53:59.386177 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01a57b2e078119d664a18e5709382536-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-1859c445b4\" (UID: \"01a57b2e078119d664a18e5709382536\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.386291 kubelet[2605]: I0813 00:53:59.386218 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.386291 kubelet[2605]: I0813 00:53:59.386250 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.386291 kubelet[2605]: I0813 00:53:59.386271 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.386540 kubelet[2605]: I0813 00:53:59.386294 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.386540 kubelet[2605]: I0813 00:53:59.386315 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01a57b2e078119d664a18e5709382536-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-1859c445b4\" (UID: \"01a57b2e078119d664a18e5709382536\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.386540 kubelet[2605]: I0813 00:53:59.386337 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01a57b2e078119d664a18e5709382536-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-1859c445b4\" (UID: \"01a57b2e078119d664a18e5709382536\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.386540 kubelet[2605]: I0813 00:53:59.386356 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d02a45e7bac14c9bcea2255908245498-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" (UID: \"d02a45e7bac14c9bcea2255908245498\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.386540 kubelet[2605]: I0813 00:53:59.386377 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c850e24e713a82e1867ac86877f28e85-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-1859c445b4\" (UID: \"c850e24e713a82e1867ac86877f28e85\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.416066 kubelet[2605]: I0813 00:53:59.416032 2605 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.427918 kubelet[2605]: I0813 00:53:59.427880 2605 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:53:59.428078 kubelet[2605]: I0813 00:53:59.427959 2605 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:00.157116 kubelet[2605]: I0813 00:54:00.157071 2605 apiserver.go:52] "Watching apiserver" Aug 13 00:54:00.185368 kubelet[2605]: I0813 00:54:00.185325 2605 desired_state_of_world_populator.go:155] 
"Finished populating initial desired state of world" Aug 13 00:54:00.271130 kubelet[2605]: W0813 00:54:00.271100 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:54:00.271455 kubelet[2605]: E0813 00:54:00.271429 2605 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-a-1859c445b4\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" Aug 13 00:54:00.271766 kubelet[2605]: W0813 00:54:00.271350 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:54:00.271971 kubelet[2605]: E0813 00:54:00.271956 2605 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-a-1859c445b4\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-a-1859c445b4" Aug 13 00:54:00.306333 kubelet[2605]: I0813 00:54:00.306268 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-a-1859c445b4" podStartSLOduration=1.306243011 podStartE2EDuration="1.306243011s" podCreationTimestamp="2025-08-13 00:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:00.290240948 +0000 UTC m=+1.192601927" watchObservedRunningTime="2025-08-13 00:54:00.306243011 +0000 UTC m=+1.208604090" Aug 13 00:54:00.321056 kubelet[2605]: I0813 00:54:00.320993 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-a-1859c445b4" podStartSLOduration=1.320969638 podStartE2EDuration="1.320969638s" podCreationTimestamp="2025-08-13 00:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:00.307051435 +0000 UTC m=+1.209412414" watchObservedRunningTime="2025-08-13 00:54:00.320969638 +0000 UTC m=+1.223330617" Aug 13 00:54:00.339256 kubelet[2605]: I0813 00:54:00.339192 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-1859c445b4" podStartSLOduration=1.3391704660000001 podStartE2EDuration="1.339170466s" podCreationTimestamp="2025-08-13 00:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:00.321511554 +0000 UTC m=+1.223872633" watchObservedRunningTime="2025-08-13 00:54:00.339170466 +0000 UTC m=+1.241531545" Aug 13 00:54:03.247885 kubelet[2605]: I0813 00:54:03.247839 2605 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:54:03.248834 env[1532]: time="2025-08-13T00:54:03.248785377Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:54:03.249244 kubelet[2605]: I0813 00:54:03.249073 2605 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:54:04.214915 kubelet[2605]: I0813 00:54:04.214841 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba366d13-9182-438f-b258-8d901daa4e61-kube-proxy\") pod \"kube-proxy-95wjt\" (UID: \"ba366d13-9182-438f-b258-8d901daa4e61\") " pod="kube-system/kube-proxy-95wjt" Aug 13 00:54:04.214915 kubelet[2605]: I0813 00:54:04.214918 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba366d13-9182-438f-b258-8d901daa4e61-lib-modules\") pod \"kube-proxy-95wjt\" (UID: \"ba366d13-9182-438f-b258-8d901daa4e61\") " pod="kube-system/kube-proxy-95wjt" Aug 13 00:54:04.215169 kubelet[2605]: I0813 00:54:04.214949 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8qv\" (UniqueName: \"kubernetes.io/projected/ba366d13-9182-438f-b258-8d901daa4e61-kube-api-access-jc8qv\") pod \"kube-proxy-95wjt\" (UID: \"ba366d13-9182-438f-b258-8d901daa4e61\") " pod="kube-system/kube-proxy-95wjt" Aug 13 00:54:04.215169 kubelet[2605]: I0813 00:54:04.214981 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba366d13-9182-438f-b258-8d901daa4e61-xtables-lock\") pod \"kube-proxy-95wjt\" (UID: \"ba366d13-9182-438f-b258-8d901daa4e61\") " pod="kube-system/kube-proxy-95wjt" Aug 13 00:54:04.323299 kubelet[2605]: I0813 00:54:04.323262 2605 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:54:04.415945 env[1532]: time="2025-08-13T00:54:04.415898428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95wjt,Uid:ba366d13-9182-438f-b258-8d901daa4e61,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:04.416989 kubelet[2605]: I0813 00:54:04.416959 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9pf\" (UniqueName: \"kubernetes.io/projected/7bc20e27-dc75-4718-bb7b-3cd6e8056a02-kube-api-access-zn9pf\") pod \"tigera-operator-5bf8dfcb4-8xj86\" (UID: \"7bc20e27-dc75-4718-bb7b-3cd6e8056a02\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-8xj86" Aug 13 00:54:04.417212 kubelet[2605]: I0813 00:54:04.417172 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bc20e27-dc75-4718-bb7b-3cd6e8056a02-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-8xj86\" (UID: \"7bc20e27-dc75-4718-bb7b-3cd6e8056a02\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-8xj86" Aug 13 00:54:04.455052 env[1532]: time="2025-08-13T00:54:04.454982054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:04.455221 env[1532]: time="2025-08-13T00:54:04.455022356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:04.455221 env[1532]: time="2025-08-13T00:54:04.455036656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:04.455221 env[1532]: time="2025-08-13T00:54:04.455183660Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a515b4618eb7d58c266c5ce0ea990dddd03551630a7a0eab482044722e33300 pid=2674 runtime=io.containerd.runc.v2 Aug 13 00:54:04.500225 env[1532]: time="2025-08-13T00:54:04.500136441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95wjt,Uid:ba366d13-9182-438f-b258-8d901daa4e61,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a515b4618eb7d58c266c5ce0ea990dddd03551630a7a0eab482044722e33300\"" Aug 13 00:54:04.504471 env[1532]: time="2025-08-13T00:54:04.504433854Z" level=info msg="CreateContainer within sandbox \"8a515b4618eb7d58c266c5ce0ea990dddd03551630a7a0eab482044722e33300\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:54:04.553911 env[1532]: time="2025-08-13T00:54:04.553841152Z" level=info msg="CreateContainer within sandbox \"8a515b4618eb7d58c266c5ce0ea990dddd03551630a7a0eab482044722e33300\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8e2841eaa2b7a6b2e0595bb778d53c780ac26dba531331deccaac5bc7047e3e0\"" Aug 13 00:54:04.555160 env[1532]: time="2025-08-13T00:54:04.554659973Z" level=info msg="StartContainer for \"8e2841eaa2b7a6b2e0595bb778d53c780ac26dba531331deccaac5bc7047e3e0\"" Aug 13 00:54:04.607556 env[1532]: time="2025-08-13T00:54:04.607505362Z" level=info msg="StartContainer for \"8e2841eaa2b7a6b2e0595bb778d53c780ac26dba531331deccaac5bc7047e3e0\" returns successfully" Aug 13 00:54:04.692270 env[1532]: time="2025-08-13T00:54:04.692209588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-8xj86,Uid:7bc20e27-dc75-4718-bb7b-3cd6e8056a02,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:54:04.756861 env[1532]: time="2025-08-13T00:54:04.756564078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:04.756861 env[1532]: time="2025-08-13T00:54:04.756607080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:04.756861 env[1532]: time="2025-08-13T00:54:04.756628480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:04.757339 env[1532]: time="2025-08-13T00:54:04.757291998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e39281c60fca9dbf61381d4636be00bb48552de03fa433bd2fdb49dee10fd2e6 pid=2759 runtime=io.containerd.runc.v2 Aug 13 00:54:04.821129 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 00:54:04.821282 kernel: audit: type=1325 audit(1755046444.808:227): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.808000 audit[2811]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.808000 audit[2811]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef9092c40 a2=0 a3=7ffef9092c2c items=0 ppid=2726 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.849120 env[1532]: time="2025-08-13T00:54:04.845222508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-8xj86,Uid:7bc20e27-dc75-4718-bb7b-3cd6e8056a02,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e39281c60fca9dbf61381d4636be00bb48552de03fa433bd2fdb49dee10fd2e6\"" Aug 13 00:54:04.849120 env[1532]: time="2025-08-13T00:54:04.847736874Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:54:04.855005 kernel: audit: type=1300 audit(1755046444.808:227): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef9092c40 a2=0 a3=7ffef9092c2c items=0 ppid=2726 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:54:04.865049 kernel: audit: type=1327 audit(1755046444.808:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:54:04.865121 kernel: audit: type=1325 audit(1755046444.819:228): table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:04.819000 audit[2812]: NETFILTER_CFG table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:04.819000 audit[2812]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd8885f80 a2=0 a3=7ffdd8885f6c items=0 ppid=2726 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.891766 kernel: audit: type=1300 audit(1755046444.819:228): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd8885f80 a2=0 a3=7ffdd8885f6c items=0 ppid=2726 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.819000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:54:04.819000 audit[2814]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=2814 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:04.901918 kernel: audit: type=1327 audit(1755046444.819:228): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:54:04.901969 kernel: audit: type=1325 audit(1755046444.819:229): table=nat:43 family=10 entries=1 op=nft_register_chain pid=2814 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:04.819000 audit[2814]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb82610c0 a2=0 a3=7fffb82610ac items=0 ppid=2726 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.910874 kernel: audit: type=1300 audit(1755046444.819:229): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb82610c0 a2=0 a3=7fffb82610ac items=0 ppid=2726 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.937893 kernel: audit: type=1327 audit(1755046444.819:229): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:54:04.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:54:04.819000 audit[2815]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:04.949644 kernel: audit: type=1325 audit(1755046444.819:230): table=filter:44 family=10 entries=1 op=nft_register_chain pid=2815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:04.819000 audit[2815]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff253f97a0 a2=0 a3=7fff253f978c items=0 ppid=2726 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:54:04.834000 audit[2813]: NETFILTER_CFG table=nat:45 family=2 entries=1 op=nft_register_chain pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.834000 audit[2813]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa2e606c0 a2=0 a3=7fffa2e606ac items=0 ppid=2726 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.834000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:54:04.837000 audit[2822]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.837000 audit[2822]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7fff3f52d5e0 a2=0 a3=7fff3f52d5cc items=0 ppid=2726 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.837000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:54:04.927000 audit[2825]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.927000 audit[2825]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffefc774ad0 a2=0 a3=7ffefc774abc items=0 ppid=2726 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:54:04.930000 audit[2827]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.930000 audit[2827]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc2ef630d0 a2=0 a3=7ffc2ef630bc items=0 ppid=2726 pid=2827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 00:54:04.934000 audit[2830]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.934000 audit[2830]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc15c1d60 a2=0 a3=7fffc15c1d4c items=0 ppid=2726 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 00:54:04.936000 audit[2831]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.936000 audit[2831]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeeef355b0 a2=0 a3=7ffeeef3559c items=0 ppid=2726 pid=2831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:54:04.939000 audit[2833]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule 
pid=2833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.939000 audit[2833]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe565ddb70 a2=0 a3=7ffe565ddb5c items=0 ppid=2726 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:54:04.940000 audit[2834]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.940000 audit[2834]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcf9ef0e0 a2=0 a3=7ffdcf9ef0cc items=0 ppid=2726 pid=2834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:54:04.943000 audit[2836]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.943000 audit[2836]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffeb41ff20 a2=0 a3=7fffeb41ff0c items=0 ppid=2726 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:54:04.947000 audit[2839]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.947000 audit[2839]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff45cc45b0 a2=0 a3=7fff45cc459c items=0 ppid=2726 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.947000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 00:54:04.949000 audit[2840]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2840 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.949000 audit[2840]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec6708310 a2=0 a3=7ffec67082fc items=0 ppid=2726 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
00:54:04.949000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:54:04.952000 audit[2842]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2842 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.952000 audit[2842]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3ff1c230 a2=0 a3=7ffc3ff1c21c items=0 ppid=2726 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.952000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:54:04.953000 audit[2843]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.953000 audit[2843]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeaab1b090 a2=0 a3=7ffeaab1b07c items=0 ppid=2726 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:54:04.956000 audit[2845]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.956000 audit[2845]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd64923350 a2=0 a3=7ffd6492333c items=0 ppid=2726 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:54:04.960000 audit[2848]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=2848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.960000 audit[2848]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe46492480 a2=0 a3=7ffe4649246c items=0 ppid=2726 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:54:04.963000 audit[2851]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2851 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.963000 audit[2851]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb118c000 a2=0 
a3=7fffb118bfec items=0 ppid=2726 pid=2851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:54:04.964000 audit[2852]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.964000 audit[2852]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa0660150 a2=0 a3=7fffa066013c items=0 ppid=2726 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:54:04.967000 audit[2854]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.967000 audit[2854]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff8815ad70 a2=0 a3=7fff8815ad5c items=0 ppid=2726 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:54:04.971000 audit[2857]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=2857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.971000 audit[2857]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff2603cd80 a2=0 a3=7fff2603cd6c items=0 ppid=2726 pid=2857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:54:04.972000 audit[2858]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=2858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.972000 audit[2858]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc923f2a60 a2=0 a3=7ffc923f2a4c items=0 ppid=2726 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.972000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:54:04.974000 audit[2860]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=2860 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:04.974000 audit[2860]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc4e639850 a2=0 a3=7ffc4e63983c items=0 ppid=2726 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:04.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:54:05.100000 audit[2866]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:05.100000 audit[2866]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd51cfa190 a2=0 a3=7ffd51cfa17c items=0 ppid=2726 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:05.143000 audit[2866]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:05.143000 audit[2866]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd51cfa190 a2=0 a3=7ffd51cfa17c items=0 ppid=2726 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:05.146000 audit[2871]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.146000 audit[2871]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe2404b0c0 a2=0 a3=7ffe2404b0ac items=0 ppid=2726 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:54:05.149000 audit[2873]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=2873 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.149000 audit[2873]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc87619d40 a2=0 a3=7ffc87619d2c items=0 ppid=2726 pid=2873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.149000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 00:54:05.153000 audit[2876]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=2876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.153000 audit[2876]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffced7b77a0 a2=0 a3=7ffced7b778c items=0 ppid=2726 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 00:54:05.154000 audit[2877]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=2877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.154000 audit[2877]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd88073df0 a2=0 a3=7ffd88073ddc items=0 ppid=2726 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.154000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:54:05.157000 audit[2879]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.157000 audit[2879]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc379c7ea0 a2=0 a3=7ffc379c7e8c items=0 ppid=2726 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:54:05.158000 audit[2880]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2880 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.158000 audit[2880]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8fb12eb0 a2=0 a3=7ffc8fb12e9c items=0 ppid=2726 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:54:05.161000 audit[2882]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.161000 audit[2882]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd077e3000 a2=0 
a3=7ffd077e2fec items=0 ppid=2726 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 00:54:05.164000 audit[2885]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.164000 audit[2885]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdf98c2670 a2=0 a3=7ffdf98c265c items=0 ppid=2726 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:54:05.165000 audit[2886]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=2886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.165000 audit[2886]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6ba214a0 a2=0 a3=7ffd6ba2148c items=0 ppid=2726 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:54:05.168000 audit[2888]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.168000 audit[2888]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff79d8c580 a2=0 a3=7fff79d8c56c items=0 ppid=2726 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.168000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:54:05.169000 audit[2889]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=2889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.169000 audit[2889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4e59fa00 a2=0 a3=7ffe4e59f9ec items=0 ppid=2726 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:54:05.171000 
audit[2891]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.171000 audit[2891]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbf0a3420 a2=0 a3=7ffdbf0a340c items=0 ppid=2726 pid=2891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:54:05.175000 audit[2894]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=2894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.175000 audit[2894]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbe9df020 a2=0 a3=7ffcbe9df00c items=0 ppid=2726 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:54:05.179000 audit[2897]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=2897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.179000 audit[2897]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5194e1b0 a2=0 a3=7fff5194e19c items=0 ppid=2726 pid=2897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 00:54:05.180000 audit[2898]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.180000 audit[2898]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0a306ea0 a2=0 a3=7fff0a306e8c items=0 ppid=2726 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:54:05.182000 audit[2900]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.182000 audit[2900]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd269bff50 a2=0 a3=7ffd269bff3c items=0 ppid=2726 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.182000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:54:05.185000 audit[2903]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=2903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.185000 audit[2903]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe233ee780 a2=0 a3=7ffe233ee76c items=0 ppid=2726 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.185000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:54:05.187000 audit[2904]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=2904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.187000 audit[2904]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebd186a10 a2=0 a3=7ffebd1869fc items=0 ppid=2726 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:54:05.189000 audit[2906]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=2906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.189000 audit[2906]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeda210bd0 a2=0 a3=7ffeda210bbc items=0 ppid=2726 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:54:05.190000 audit[2907]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.190000 audit[2907]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd6d10640 a2=0 a3=7ffdd6d1062c items=0 ppid=2726 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:54:05.192000 audit[2909]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.192000 audit[2909]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7fffabbe1480 a2=0 a3=7fffabbe146c items=0 ppid=2726 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:54:05.197000 audit[2912]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=2912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:05.197000 audit[2912]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd0bf7d5d0 a2=0 a3=7ffd0bf7d5bc items=0 ppid=2726 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:54:05.200000 audit[2914]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=2914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:54:05.200000 audit[2914]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff54c8eb10 a2=0 a3=7fff54c8eafc items=0 ppid=2726 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.200000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:05.201000 audit[2914]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:54:05.201000 audit[2914]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff54c8eb10 a2=0 a3=7fff54c8eafc items=0 ppid=2726 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:05.201000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:06.137688 env[1532]: time="2025-08-13T00:54:06.137611873Z" level=error msg="PullImage \"quay.io/tigera/operator:v1.38.3\" failed" error="failed to pull and unpack image \"quay.io/tigera/operator:v1.38.3\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/8b/8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250813%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250813T005406Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=38ab1dc4d0d2d25403c309682609fd7d1fd4ba883f470593df9c666b1be776a3®ion=us-east-1&namespace=tigera&repo_name=operator&akamai_signature=exp=1755047346~hmac=1aedf5160a393a51bf15bae9a990c1fa38c9a2861ed05ca83fdf8e43ac0ad292\": dial tcp: lookup cdn01.quay.io: no such host" Aug 13 00:54:06.138202 kubelet[2605]: E0813 00:54:06.137967 2605 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = 
failed to pull and unpack image \"quay.io/tigera/operator:v1.38.3\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/8b/8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250813%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250813T005406Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=38ab1dc4d0d2d25403c309682609fd7d1fd4ba883f470593df9c666b1be776a3®ion=us-east-1&namespace=tigera&repo_name=operator&akamai_signature=exp=1755047346~hmac=1aedf5160a393a51bf15bae9a990c1fa38c9a2861ed05ca83fdf8e43ac0ad292\": dial tcp: lookup cdn01.quay.io: no such host" image="quay.io/tigera/operator:v1.38.3" Aug 13 00:54:06.138202 kubelet[2605]: E0813 00:54:06.138033 2605 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/tigera/operator:v1.38.3\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/8b/8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250813%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250813T005406Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=38ab1dc4d0d2d25403c309682609fd7d1fd4ba883f470593df9c666b1be776a3®ion=us-east-1&namespace=tigera&repo_name=operator&akamai_signature=exp=1755047346~hmac=1aedf5160a393a51bf15bae9a990c1fa38c9a2861ed05ca83fdf8e43ac0ad292\": dial tcp: lookup cdn01.quay.io: no such host" image="quay.io/tigera/operator:v1.38.3" Aug 13 00:54:06.138644 kubelet[2605]: E0813 00:54:06.138181 2605 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tigera-operator,Image:quay.io/tigera/operator:v1.38.3,Command:[operator],Args:[-manage-crds=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:tigera-operator,ValueFrom:nil,},EnvVar{Name:TIGERA_OPERATOR_INIT_IMAGE_VERSION,Value:v1.38.3,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-lib-calico,ReadOnly:true,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn9pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:kubernetes-services-endpoint,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tigera-operator-5bf8dfcb4-8xj86_tigera-operator(7bc20e27-dc75-4718-bb7b-3cd6e8056a02): ErrImagePull: failed to pull and unpack image \"quay.io/tigera/operator:v1.38.3\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn01.quay.io/quayio-production-s3/sha256/8b/8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250813%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250813T005406Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=38ab1dc4d0d2d25403c309682609fd7d1fd4ba883f470593df9c666b1be776a3®ion=us-east-1&namespace=tigera&repo_name=operator&akamai_signature=exp=1755047346~hmac=1aedf5160a393a51bf15bae9a990c1fa38c9a2861ed05ca83fdf8e43ac0ad292\": dial tcp: lookup cdn01.quay.io: no such host" logger="UnhandledError" Aug 13 00:54:06.139979 kubelet[2605]: E0813 00:54:06.139932 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/tigera/operator:v1.38.3\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/8b/8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250813%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250813T005406Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=38ab1dc4d0d2d25403c309682609fd7d1fd4ba883f470593df9c666b1be776a3®ion=us-east-1&namespace=tigera&repo_name=operator&akamai_signature=exp=1755047346~hmac=1aedf5160a393a51bf15bae9a990c1fa38c9a2861ed05ca83fdf8e43ac0ad292\\\": dial tcp: lookup cdn01.quay.io: no such host\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-8xj86" podUID="7bc20e27-dc75-4718-bb7b-3cd6e8056a02" Aug 13 00:54:06.264974 kubelet[2605]: E0813 00:54:06.264921 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tigera/operator:v1.38.3\\\"\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-8xj86" podUID="7bc20e27-dc75-4718-bb7b-3cd6e8056a02" Aug 13 00:54:06.277996 kubelet[2605]: I0813 00:54:06.277403 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-95wjt" podStartSLOduration=2.277360773 podStartE2EDuration="2.277360773s" podCreationTimestamp="2025-08-13 00:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:05.29099424 +0000 UTC m=+6.193355219" watchObservedRunningTime="2025-08-13 00:54:06.277360773 +0000 UTC m=+7.179721852" Aug 13 00:54:18.200133 env[1532]: time="2025-08-13T00:54:18.199270951Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:54:20.507242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476440006.mount: Deactivated successfully. 
Aug 13 00:54:21.287889 env[1532]: time="2025-08-13T00:54:21.287835415Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:21.295718 env[1532]: time="2025-08-13T00:54:21.295682756Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:21.298912 env[1532]: time="2025-08-13T00:54:21.298873413Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:21.305568 env[1532]: time="2025-08-13T00:54:21.305534733Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:21.306045 env[1532]: time="2025-08-13T00:54:21.306013441Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:54:21.309620 env[1532]: time="2025-08-13T00:54:21.309589405Z" level=info msg="CreateContainer within sandbox \"e39281c60fca9dbf61381d4636be00bb48552de03fa433bd2fdb49dee10fd2e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:54:21.342323 env[1532]: time="2025-08-13T00:54:21.342272791Z" level=info msg="CreateContainer within sandbox \"e39281c60fca9dbf61381d4636be00bb48552de03fa433bd2fdb49dee10fd2e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3cc3b8e29f03ad3cba5b913e88b067583eb66ddf17391c9f92bc570fad628349\"" Aug 13 00:54:21.344651 env[1532]: time="2025-08-13T00:54:21.342804301Z" level=info msg="StartContainer for \"3cc3b8e29f03ad3cba5b913e88b067583eb66ddf17391c9f92bc570fad628349\"" Aug 13 00:54:21.401536 env[1532]: time="2025-08-13T00:54:21.401482753Z" level=info msg="StartContainer for \"3cc3b8e29f03ad3cba5b913e88b067583eb66ddf17391c9f92bc570fad628349\" returns successfully" Aug 13 00:54:22.307298 kubelet[2605]: I0813 00:54:22.307236 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-8xj86" podStartSLOduration=1.846184059 podStartE2EDuration="18.307217189s" podCreationTimestamp="2025-08-13 00:54:04 +0000 UTC" firstStartedPulling="2025-08-13 00:54:04.846176633 +0000 UTC m=+5.748537612" lastFinishedPulling="2025-08-13 00:54:21.307209663 +0000 UTC m=+22.209570742" observedRunningTime="2025-08-13 00:54:22.306897284 +0000 UTC m=+23.209258363" watchObservedRunningTime="2025-08-13 00:54:22.307217189 +0000 UTC m=+23.209578168" Aug 13 00:54:27.875479 sudo[1948]: pam_unix(sudo:session): session closed for user root Aug 13 00:54:27.897696 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 00:54:27.897815 kernel: audit: type=1106 audit(1755046467.874:278): pid=1948 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:27.874000 audit[1948]: USER_END pid=1948 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:54:27.874000 audit[1948]: CRED_DISP pid=1948 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:54:27.920906 kernel: audit: type=1104 audit(1755046467.874:279): pid=1948 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:54:27.976478 sshd[1944]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:27.977000 audit[1944]: USER_END pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:54:28.000015 kernel: audit: type=1106 audit(1755046467.977:280): pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:54:27.980035 systemd[1]: sshd@6-10.200.4.17:22-10.200.16.10:44230.service: Deactivated successfully. Aug 13 00:54:27.980847 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:54:28.000555 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:54:28.001652 systemd-logind[1516]: Removed session 9. Aug 13 00:54:27.977000 audit[1944]: CRED_DISP pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:54:28.025020 kernel: audit: type=1104 audit(1755046467.977:281): pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:54:27.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.17:22-10.200.16.10:44230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:28.043884 kernel: audit: type=1131 audit(1755046467.979:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.17:22-10.200.16.10:44230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:28.633000 audit[3000]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:28.646881 kernel: audit: type=1325 audit(1755046468.633:283): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:28.633000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe1f7d5dd0 a2=0 a3=7ffe1f7d5dbc items=0 ppid=2726 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:28.668898 kernel: audit: type=1300 audit(1755046468.633:283): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe1f7d5dd0 a2=0 a3=7ffe1f7d5dbc items=0 ppid=2726 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:28.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:28.687878 kernel: audit: type=1327 audit(1755046468.633:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:28.669000 audit[3000]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:28.715884 kernel: audit: type=1325 audit(1755046468.669:284): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:28.669000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe1f7d5dd0 a2=0 a3=0 items=0 ppid=2726 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:28.748883 kernel: audit: type=1300 audit(1755046468.669:284): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe1f7d5dd0 a2=0 a3=0 items=0 ppid=2726 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:28.669000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:28.847000 audit[3002]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:28.847000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb519fa60 a2=0 a3=7fffb519fa4c items=0 ppid=2726 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:28.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:28.851000 audit[3002]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3002 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:28.851000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb519fa60 a2=0 a3=0 items=0 ppid=2726 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:28.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:31.735000 audit[3004]: NETFILTER_CFG table=filter:96 family=2 entries=17 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:31.735000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff40fdf4c0 a2=0 a3=7fff40fdf4ac items=0 ppid=2726 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:31.740000 audit[3004]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:31.740000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff40fdf4c0 a2=0 a3=0 items=0 ppid=2726 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:31.758000 audit[3006]: NETFILTER_CFG table=filter:98 family=2 entries=18 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:31.758000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcf19ba0c0 a2=0 a3=7ffcf19ba0ac items=0 ppid=2726 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:31.765000 audit[3006]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:31.765000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf19ba0c0 a2=0 a3=0 items=0 ppid=2726 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:32.028389 kubelet[2605]: I0813 00:54:32.028265 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/132e462c-54e3-4110-9148-a8673fc3c046-typha-certs\") pod \"calico-typha-55c54485cc-z42h5\" (UID: \"132e462c-54e3-4110-9148-a8673fc3c046\") " pod="calico-system/calico-typha-55c54485cc-z42h5" Aug 13 00:54:32.028878 kubelet[2605]: I0813 00:54:32.028411 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xbfv\" (UniqueName: \"kubernetes.io/projected/132e462c-54e3-4110-9148-a8673fc3c046-kube-api-access-5xbfv\") pod \"calico-typha-55c54485cc-z42h5\" (UID: \"132e462c-54e3-4110-9148-a8673fc3c046\") " pod="calico-system/calico-typha-55c54485cc-z42h5" Aug 13 00:54:32.028878 kubelet[2605]: I0813 00:54:32.028448 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/132e462c-54e3-4110-9148-a8673fc3c046-tigera-ca-bundle\") pod \"calico-typha-55c54485cc-z42h5\" (UID: \"132e462c-54e3-4110-9148-a8673fc3c046\") " pod="calico-system/calico-typha-55c54485cc-z42h5" Aug 13 00:54:32.230676 kubelet[2605]: I0813 00:54:32.230626 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-lib-modules\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.230676 kubelet[2605]: I0813 00:54:32.230678 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-var-run-calico\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230702 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-cni-net-dir\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230724 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-cni-bin-dir\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230743 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-policysync\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230763 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3578cb7f-8930-49d8-b2ed-82a23431d2ea-tigera-ca-bundle\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230782 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-cni-log-dir\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230805 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-flexvol-driver-host\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230829 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3578cb7f-8930-49d8-b2ed-82a23431d2ea-node-certs\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230852 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-var-lib-calico\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230884 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3578cb7f-8930-49d8-b2ed-82a23431d2ea-xtables-lock\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.231005 kubelet[2605]: I0813 00:54:32.230907 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b287\" (UniqueName: \"kubernetes.io/projected/3578cb7f-8930-49d8-b2ed-82a23431d2ea-kube-api-access-9b287\") pod \"calico-node-rbfw4\" (UID: \"3578cb7f-8930-49d8-b2ed-82a23431d2ea\") " pod="calico-system/calico-node-rbfw4" Aug 13 00:54:32.344240 kubelet[2605]: E0813 00:54:32.344212 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.344426 kubelet[2605]: W0813 00:54:32.344409 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.344517 kubelet[2605]: E0813 00:54:32.344501 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.352698 kubelet[2605]: E0813 00:54:32.351329 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.352698 kubelet[2605]: W0813 00:54:32.351350 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.352698 kubelet[2605]: E0813 00:54:32.351371 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.434988 env[1532]: time="2025-08-13T00:54:32.434491864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rbfw4,Uid:3578cb7f-8930-49d8-b2ed-82a23431d2ea,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:32.436397 kubelet[2605]: E0813 00:54:32.435802 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qf7t2" podUID="5dfe102c-5690-449a-a336-a40d559d5b09" Aug 13 00:54:32.452168 env[1532]: time="2025-08-13T00:54:32.452122320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c54485cc-z42h5,Uid:132e462c-54e3-4110-9148-a8673fc3c046,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:32.478196 env[1532]: time="2025-08-13T00:54:32.478127397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:32.478196 env[1532]: time="2025-08-13T00:54:32.478168897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:32.480559 env[1532]: time="2025-08-13T00:54:32.478182598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:32.480559 env[1532]: time="2025-08-13T00:54:32.479805021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be3b50b665a4ae18f830b7c9d14d4f304da34dbf1e8b60314c133adc7b9822ee pid=3030 runtime=io.containerd.runc.v2 Aug 13 00:54:32.514469 env[1532]: time="2025-08-13T00:54:32.514383222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:32.514692 env[1532]: time="2025-08-13T00:54:32.514658826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:32.514806 env[1532]: time="2025-08-13T00:54:32.514780928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:32.515086 env[1532]: time="2025-08-13T00:54:32.515053032Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daa864ab664c7325307787fae2e5af128b54877df62a1a7bbf42e8f34364800 pid=3055 runtime=io.containerd.runc.v2 Aug 13 00:54:32.531724 kubelet[2605]: E0813 00:54:32.531680 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.531724 kubelet[2605]: W0813 00:54:32.531721 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.531995 kubelet[2605]: E0813 00:54:32.531749 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.532194 kubelet[2605]: E0813 00:54:32.532173 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.532276 kubelet[2605]: W0813 00:54:32.532208 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.532276 kubelet[2605]: E0813 00:54:32.532227 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.533155 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.533172 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.533188 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.533466 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.533478 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.533493 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.533712 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.533723 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.533738 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.533933 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.533945 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.533957 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.534142 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.534153 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.534166 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.534346 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.534357 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.534368 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.534658 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.534670 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.534684 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.534882 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.534893 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.534906 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.535086 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.535193 kubelet[2605]: W0813 00:54:32.535097 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.535193 kubelet[2605]: E0813 00:54:32.535110 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.539306 kubelet[2605]: E0813 00:54:32.535665 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.539306 kubelet[2605]: W0813 00:54:32.535679 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.539306 kubelet[2605]: E0813 00:54:32.535692 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.539306 kubelet[2605]: E0813 00:54:32.536099 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.539306 kubelet[2605]: W0813 00:54:32.536111 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.539306 kubelet[2605]: E0813 00:54:32.536125 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.539306 kubelet[2605]: E0813 00:54:32.536772 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.539306 kubelet[2605]: W0813 00:54:32.536784 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.539306 kubelet[2605]: E0813 00:54:32.536799 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.539910 kubelet[2605]: E0813 00:54:32.539804 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.539910 kubelet[2605]: W0813 00:54:32.539822 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.540785 kubelet[2605]: E0813 00:54:32.539852 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.540785 kubelet[2605]: E0813 00:54:32.540280 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.540785 kubelet[2605]: W0813 00:54:32.540292 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.540785 kubelet[2605]: E0813 00:54:32.540305 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.540785 kubelet[2605]: E0813 00:54:32.540501 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.540785 kubelet[2605]: W0813 00:54:32.540511 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.540785 kubelet[2605]: E0813 00:54:32.540523 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.540785 kubelet[2605]: E0813 00:54:32.540696 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.540785 kubelet[2605]: W0813 00:54:32.540707 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.540785 kubelet[2605]: E0813 00:54:32.540717 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.541670 kubelet[2605]: E0813 00:54:32.541395 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.541670 kubelet[2605]: W0813 00:54:32.541410 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.541670 kubelet[2605]: E0813 00:54:32.541423 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.541670 kubelet[2605]: E0813 00:54:32.541610 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.541670 kubelet[2605]: W0813 00:54:32.541634 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.542246 kubelet[2605]: E0813 00:54:32.541648 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.542582 kubelet[2605]: E0813 00:54:32.542568 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.542682 kubelet[2605]: W0813 00:54:32.542666 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.542906 kubelet[2605]: E0813 00:54:32.542751 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.542906 kubelet[2605]: I0813 00:54:32.542790 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5dfe102c-5690-449a-a336-a40d559d5b09-varrun\") pod \"csi-node-driver-qf7t2\" (UID: \"5dfe102c-5690-449a-a336-a40d559d5b09\") " pod="calico-system/csi-node-driver-qf7t2" Aug 13 00:54:32.543195 kubelet[2605]: E0813 00:54:32.543180 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.543279 kubelet[2605]: W0813 00:54:32.543265 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.543358 kubelet[2605]: E0813 00:54:32.543344 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.543525 kubelet[2605]: I0813 00:54:32.543508 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xffs5\" (UniqueName: \"kubernetes.io/projected/5dfe102c-5690-449a-a336-a40d559d5b09-kube-api-access-xffs5\") pod \"csi-node-driver-qf7t2\" (UID: \"5dfe102c-5690-449a-a336-a40d559d5b09\") " pod="calico-system/csi-node-driver-qf7t2" Aug 13 00:54:32.543773 kubelet[2605]: E0813 00:54:32.543760 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.544090 kubelet[2605]: W0813 00:54:32.544072 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.544194 kubelet[2605]: E0813 00:54:32.544179 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.544482 kubelet[2605]: E0813 00:54:32.544467 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.548316 kubelet[2605]: W0813 00:54:32.548268 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.548451 kubelet[2605]: E0813 00:54:32.548435 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.548684 kubelet[2605]: I0813 00:54:32.548663 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5dfe102c-5690-449a-a336-a40d559d5b09-registration-dir\") pod \"csi-node-driver-qf7t2\" (UID: \"5dfe102c-5690-449a-a336-a40d559d5b09\") " pod="calico-system/csi-node-driver-qf7t2" Aug 13 00:54:32.548928 kubelet[2605]: E0813 00:54:32.548914 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.549026 kubelet[2605]: W0813 00:54:32.549011 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.550202 kubelet[2605]: E0813 00:54:32.550184 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.550514 kubelet[2605]: E0813 00:54:32.550499 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.550624 kubelet[2605]: W0813 00:54:32.550609 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.550706 kubelet[2605]: E0813 00:54:32.550692 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.551028 kubelet[2605]: E0813 00:54:32.551013 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.551122 kubelet[2605]: W0813 00:54:32.551109 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.551219 kubelet[2605]: E0813 00:54:32.551207 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.551379 kubelet[2605]: I0813 00:54:32.551366 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5dfe102c-5690-449a-a336-a40d559d5b09-socket-dir\") pod \"csi-node-driver-qf7t2\" (UID: \"5dfe102c-5690-449a-a336-a40d559d5b09\") " pod="calico-system/csi-node-driver-qf7t2" Aug 13 00:54:32.551583 kubelet[2605]: E0813 00:54:32.551571 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.551681 kubelet[2605]: W0813 00:54:32.551667 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.551759 kubelet[2605]: E0813 00:54:32.551746 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.552038 kubelet[2605]: E0813 00:54:32.552023 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.552131 kubelet[2605]: W0813 00:54:32.552117 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.552205 kubelet[2605]: E0813 00:54:32.552194 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.552486 kubelet[2605]: E0813 00:54:32.552473 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.552638 kubelet[2605]: W0813 00:54:32.552582 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.552741 kubelet[2605]: E0813 00:54:32.552727 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.553036 kubelet[2605]: E0813 00:54:32.553022 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.553130 kubelet[2605]: W0813 00:54:32.553116 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.553226 kubelet[2605]: E0813 00:54:32.553209 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.553581 kubelet[2605]: E0813 00:54:32.553566 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.553687 kubelet[2605]: W0813 00:54:32.553672 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.553780 kubelet[2605]: E0813 00:54:32.553768 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.553971 kubelet[2605]: I0813 00:54:32.553954 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dfe102c-5690-449a-a336-a40d559d5b09-kubelet-dir\") pod \"csi-node-driver-qf7t2\" (UID: \"5dfe102c-5690-449a-a336-a40d559d5b09\") " pod="calico-system/csi-node-driver-qf7t2" Aug 13 00:54:32.554198 kubelet[2605]: E0813 00:54:32.554184 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.554291 kubelet[2605]: W0813 00:54:32.554278 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.554379 kubelet[2605]: E0813 00:54:32.554365 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.554676 kubelet[2605]: E0813 00:54:32.554662 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.554767 kubelet[2605]: W0813 00:54:32.554754 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.554844 kubelet[2605]: E0813 00:54:32.554832 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.561196 kubelet[2605]: E0813 00:54:32.561177 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.561317 kubelet[2605]: W0813 00:54:32.561301 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.561421 kubelet[2605]: E0813 00:54:32.561403 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.572269 env[1532]: time="2025-08-13T00:54:32.572219561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rbfw4,Uid:3578cb7f-8930-49d8-b2ed-82a23431d2ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"be3b50b665a4ae18f830b7c9d14d4f304da34dbf1e8b60314c133adc7b9822ee\"" Aug 13 00:54:32.587893 env[1532]: time="2025-08-13T00:54:32.576998130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:54:32.657012 kubelet[2605]: E0813 00:54:32.656381 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.657012 kubelet[2605]: W0813 00:54:32.656406 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.657012 kubelet[2605]: E0813 00:54:32.656434 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.657012 kubelet[2605]: E0813 00:54:32.656724 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.657012 kubelet[2605]: W0813 00:54:32.656737 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.657012 kubelet[2605]: E0813 00:54:32.656759 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.662080 kubelet[2605]: E0813 00:54:32.661093 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.662080 kubelet[2605]: W0813 00:54:32.661108 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.662080 kubelet[2605]: E0813 00:54:32.661130 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.662080 kubelet[2605]: E0813 00:54:32.661386 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.662080 kubelet[2605]: W0813 00:54:32.661404 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.662080 kubelet[2605]: E0813 00:54:32.661493 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.662080 kubelet[2605]: E0813 00:54:32.661653 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.662080 kubelet[2605]: W0813 00:54:32.661662 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.662080 kubelet[2605]: E0813 00:54:32.661751 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.662080 kubelet[2605]: E0813 00:54:32.661925 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.662080 kubelet[2605]: W0813 00:54:32.661936 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.662080 kubelet[2605]: E0813 00:54:32.662036 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.663166 kubelet[2605]: E0813 00:54:32.662770 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.663166 kubelet[2605]: W0813 00:54:32.662783 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.663166 kubelet[2605]: E0813 00:54:32.662810 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.663166 kubelet[2605]: E0813 00:54:32.663071 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.663166 kubelet[2605]: W0813 00:54:32.663090 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.663459 env[1532]: time="2025-08-13T00:54:32.662785974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c54485cc-z42h5,Uid:132e462c-54e3-4110-9148-a8673fc3c046,Namespace:calico-system,Attempt:0,} returns sandbox id \"3daa864ab664c7325307787fae2e5af128b54877df62a1a7bbf42e8f34364800\"" Aug 13 00:54:32.663515 kubelet[2605]: E0813 00:54:32.663212 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.663515 kubelet[2605]: E0813 00:54:32.663359 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.663515 kubelet[2605]: W0813 00:54:32.663370 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.663515 kubelet[2605]: E0813 00:54:32.663468 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.663687 kubelet[2605]: E0813 00:54:32.663639 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.663687 kubelet[2605]: W0813 00:54:32.663649 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.663776 kubelet[2605]: E0813 00:54:32.663755 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.664929 kubelet[2605]: E0813 00:54:32.663939 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.664929 kubelet[2605]: W0813 00:54:32.663959 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.664929 kubelet[2605]: E0813 00:54:32.664051 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.664929 kubelet[2605]: E0813 00:54:32.664180 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.664929 kubelet[2605]: W0813 00:54:32.664190 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.664929 kubelet[2605]: E0813 00:54:32.664272 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.664929 kubelet[2605]: E0813 00:54:32.664397 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.664929 kubelet[2605]: W0813 00:54:32.664406 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.664929 kubelet[2605]: E0813 00:54:32.664421 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.664929 kubelet[2605]: E0813 00:54:32.664779 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.664929 kubelet[2605]: W0813 00:54:32.664794 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.664929 kubelet[2605]: E0813 00:54:32.664911 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.665491 kubelet[2605]: E0813 00:54:32.665087 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.665491 kubelet[2605]: W0813 00:54:32.665098 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.665491 kubelet[2605]: E0813 00:54:32.665180 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.665491 kubelet[2605]: E0813 00:54:32.665293 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.665491 kubelet[2605]: W0813 00:54:32.665304 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.665491 kubelet[2605]: E0813 00:54:32.665377 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.665753 kubelet[2605]: E0813 00:54:32.665496 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.665753 kubelet[2605]: W0813 00:54:32.665506 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.665753 kubelet[2605]: E0813 00:54:32.665562 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.665931 kubelet[2605]: E0813 00:54:32.665764 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.665931 kubelet[2605]: W0813 00:54:32.665774 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.665931 kubelet[2605]: E0813 00:54:32.665906 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.666079 kubelet[2605]: E0813 00:54:32.666072 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.666128 kubelet[2605]: W0813 00:54:32.666083 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.666128 kubelet[2605]: E0813 00:54:32.666100 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.667227 kubelet[2605]: E0813 00:54:32.666338 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.667227 kubelet[2605]: W0813 00:54:32.666368 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.667227 kubelet[2605]: E0813 00:54:32.666388 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.667227 kubelet[2605]: E0813 00:54:32.666690 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.667227 kubelet[2605]: W0813 00:54:32.666702 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.667227 kubelet[2605]: E0813 00:54:32.666760 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.667227 kubelet[2605]: E0813 00:54:32.666979 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.667227 kubelet[2605]: W0813 00:54:32.666991 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.667227 kubelet[2605]: E0813 00:54:32.667037 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.667651 kubelet[2605]: E0813 00:54:32.667329 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.667651 kubelet[2605]: W0813 00:54:32.667341 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.667651 kubelet[2605]: E0813 00:54:32.667453 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.667651 kubelet[2605]: E0813 00:54:32.667577 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.667651 kubelet[2605]: W0813 00:54:32.667587 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.674062 kubelet[2605]: E0813 00:54:32.667685 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.678917 kubelet[2605]: E0813 00:54:32.678756 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.678917 kubelet[2605]: W0813 00:54:32.678774 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.678917 kubelet[2605]: E0813 00:54:32.678792 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:32.699163 kubelet[2605]: E0813 00:54:32.699132 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:32.699163 kubelet[2605]: W0813 00:54:32.699150 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:32.699318 kubelet[2605]: E0813 00:54:32.699169 2605 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:32.779000 audit[3168]: NETFILTER_CFG table=filter:100 family=2 entries=20 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:32.779000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffd6e16960 a2=0 a3=7fffd6e1694c items=0 ppid=2726 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:32.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:32.782000 audit[3168]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:32.782000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd6e16960 a2=0 a3=0 items=0 ppid=2726 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:32.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:33.659953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437751871.mount: Deactivated successfully. Aug 13 00:54:33.821171 env[1532]: time="2025-08-13T00:54:33.821122658Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:33.827835 env[1532]: time="2025-08-13T00:54:33.827757552Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:33.832211 env[1532]: time="2025-08-13T00:54:33.831891411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:33.836328 env[1532]: time="2025-08-13T00:54:33.836280074Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:33.836906 env[1532]: time="2025-08-13T00:54:33.836873282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:54:33.839477 env[1532]: time="2025-08-13T00:54:33.838940511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:54:33.840234 env[1532]: time="2025-08-13T00:54:33.840203429Z" level=info msg="CreateContainer within sandbox \"be3b50b665a4ae18f830b7c9d14d4f304da34dbf1e8b60314c133adc7b9822ee\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:54:33.871049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893934008.mount: Deactivated successfully. 
Aug 13 00:54:33.902551 env[1532]: time="2025-08-13T00:54:33.902496217Z" level=info msg="CreateContainer within sandbox \"be3b50b665a4ae18f830b7c9d14d4f304da34dbf1e8b60314c133adc7b9822ee\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8866582850087e55e00172ed006561850cc340b6903053bc1aa8ab4825bf5af6\"" Aug 13 00:54:33.903726 env[1532]: time="2025-08-13T00:54:33.903686434Z" level=info msg="StartContainer for \"8866582850087e55e00172ed006561850cc340b6903053bc1aa8ab4825bf5af6\"" Aug 13 00:54:34.003465 env[1532]: time="2025-08-13T00:54:34.003350752Z" level=info msg="StartContainer for \"8866582850087e55e00172ed006561850cc340b6903053bc1aa8ab4825bf5af6\" returns successfully" Aug 13 00:54:34.197168 kubelet[2605]: E0813 00:54:34.197104 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qf7t2" podUID="5dfe102c-5690-449a-a336-a40d559d5b09" Aug 13 00:54:34.869143 env[1532]: time="2025-08-13T00:54:34.869080066Z" level=info msg="shim disconnected" id=8866582850087e55e00172ed006561850cc340b6903053bc1aa8ab4825bf5af6 Aug 13 00:54:34.869143 env[1532]: time="2025-08-13T00:54:34.869140166Z" level=warning msg="cleaning up after shim disconnected" id=8866582850087e55e00172ed006561850cc340b6903053bc1aa8ab4825bf5af6 namespace=k8s.io Aug 13 00:54:34.869143 env[1532]: time="2025-08-13T00:54:34.869158267Z" level=info msg="cleaning up dead shim" Aug 13 00:54:34.878359 env[1532]: time="2025-08-13T00:54:34.878317095Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3222 runtime=io.containerd.runc.v2\n" Aug 13 00:54:36.197554 kubelet[2605]: E0813 00:54:36.197497 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qf7t2" podUID="5dfe102c-5690-449a-a336-a40d559d5b09" Aug 13 00:54:36.981556 env[1532]: time="2025-08-13T00:54:36.981494019Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.987597 env[1532]: time="2025-08-13T00:54:36.987544401Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.992089 env[1532]: time="2025-08-13T00:54:36.992052261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.995274 env[1532]: time="2025-08-13T00:54:36.995240605Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.995681 env[1532]: time="2025-08-13T00:54:36.995649710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:54:36.997488 
env[1532]: time="2025-08-13T00:54:36.997458735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:54:37.004821 env[1532]: time="2025-08-13T00:54:37.004661931Z" level=info msg="CreateContainer within sandbox \"3daa864ab664c7325307787fae2e5af128b54877df62a1a7bbf42e8f34364800\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:54:37.060203 env[1532]: time="2025-08-13T00:54:37.060153969Z" level=info msg="CreateContainer within sandbox \"3daa864ab664c7325307787fae2e5af128b54877df62a1a7bbf42e8f34364800\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8ed73493dfa3e3fecd751212f93c60b2c440a531f7bdaec4fda045b999d04153\"" Aug 13 00:54:37.061912 env[1532]: time="2025-08-13T00:54:37.060906779Z" level=info msg="StartContainer for \"8ed73493dfa3e3fecd751212f93c60b2c440a531f7bdaec4fda045b999d04153\"" Aug 13 00:54:37.148414 env[1532]: time="2025-08-13T00:54:37.148352241Z" level=info msg="StartContainer for \"8ed73493dfa3e3fecd751212f93c60b2c440a531f7bdaec4fda045b999d04153\" returns successfully" Aug 13 00:54:37.398082 kubelet[2605]: I0813 00:54:37.398016 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55c54485cc-z42h5" podStartSLOduration=2.078402016 podStartE2EDuration="6.397991259s" podCreationTimestamp="2025-08-13 00:54:31 +0000 UTC" firstStartedPulling="2025-08-13 00:54:32.677082981 +0000 UTC m=+33.579443960" lastFinishedPulling="2025-08-13 00:54:36.996672224 +0000 UTC m=+37.899033203" observedRunningTime="2025-08-13 00:54:37.376060367 +0000 UTC m=+38.278421346" watchObservedRunningTime="2025-08-13 00:54:37.397991259 +0000 UTC m=+38.300352338" Aug 13 00:54:37.467693 kernel: kauditd_printk_skb: 25 callbacks suppressed Aug 13 00:54:37.467825 kernel: audit: type=1325 audit(1755046477.452:293): table=filter:102 family=2 entries=21 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:37.452000 audit[3278]: NETFILTER_CFG table=filter:102 family=2 entries=21 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:37.452000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee490fb60 a2=0 a3=7ffee490fb4c items=0 ppid=2726 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.487894 kernel: audit: type=1300 audit(1755046477.452:293): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee490fb60 a2=0 a3=7ffee490fb4c items=0 ppid=2726 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.488019 kernel: audit: type=1327 audit(1755046477.452:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:37.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:37.497000 audit[3278]: NETFILTER_CFG table=nat:103 family=2 entries=19 op=nft_register_chain pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:37.509274 kernel: audit: type=1325 audit(1755046477.497:294): table=nat:103 family=2 entries=19 op=nft_register_chain pid=3278 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:37.497000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffee490fb60 a2=0 a3=7ffee490fb4c items=0 ppid=2726 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.497000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:37.536261 kernel: audit: type=1300 audit(1755046477.497:294): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffee490fb60 a2=0 a3=7ffee490fb4c items=0 ppid=2726 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.536415 kernel: audit: type=1327 audit(1755046477.497:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:38.197121 kubelet[2605]: E0813 00:54:38.197073 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qf7t2" podUID="5dfe102c-5690-449a-a336-a40d559d5b09" Aug 13 00:54:40.197819 kubelet[2605]: E0813 00:54:40.197764 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qf7t2" podUID="5dfe102c-5690-449a-a336-a40d559d5b09" Aug 13 00:54:40.629030 env[1532]: time="2025-08-13T00:54:40.628981453Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:40.635092 env[1532]: time="2025-08-13T00:54:40.635046730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:40.642346 env[1532]: time="2025-08-13T00:54:40.642304822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:40.646728 env[1532]: time="2025-08-13T00:54:40.646688077Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:40.647298 env[1532]: time="2025-08-13T00:54:40.647264684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:54:40.650778 env[1532]: time="2025-08-13T00:54:40.650742229Z" level=info msg="CreateContainer within sandbox \"be3b50b665a4ae18f830b7c9d14d4f304da34dbf1e8b60314c133adc7b9822ee\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:54:40.684102 env[1532]: time="2025-08-13T00:54:40.684054750Z" 
level=info msg="CreateContainer within sandbox \"be3b50b665a4ae18f830b7c9d14d4f304da34dbf1e8b60314c133adc7b9822ee\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"31b9289b013459c82d8cf78c9b920cda31434068f8c0a430b7b60d366dff1941\"" Aug 13 00:54:40.685713 env[1532]: time="2025-08-13T00:54:40.685120564Z" level=info msg="StartContainer for \"31b9289b013459c82d8cf78c9b920cda31434068f8c0a430b7b60d366dff1941\"" Aug 13 00:54:40.752641 env[1532]: time="2025-08-13T00:54:40.752577617Z" level=info msg="StartContainer for \"31b9289b013459c82d8cf78c9b920cda31434068f8c0a430b7b60d366dff1941\" returns successfully" Aug 13 00:54:42.197938 kubelet[2605]: E0813 00:54:42.197877 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qf7t2" podUID="5dfe102c-5690-449a-a336-a40d559d5b09" Aug 13 00:54:42.440073 env[1532]: time="2025-08-13T00:54:42.440003399Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:54:42.465245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31b9289b013459c82d8cf78c9b920cda31434068f8c0a430b7b60d366dff1941-rootfs.mount: Deactivated successfully. Aug 13 00:54:42.527572 kubelet[2605]: I0813 00:54:42.527540 2605 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643335 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/27000197-aa78-41c4-95cb-9d77cedc6876-calico-apiserver-certs\") pod \"calico-apiserver-589fbcc97d-sqb5w\" (UID: \"27000197-aa78-41c4-95cb-9d77cedc6876\") " pod="calico-apiserver/calico-apiserver-589fbcc97d-sqb5w" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643382 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzm4\" (UniqueName: \"kubernetes.io/projected/27000197-aa78-41c4-95cb-9d77cedc6876-kube-api-access-hmzm4\") pod \"calico-apiserver-589fbcc97d-sqb5w\" (UID: \"27000197-aa78-41c4-95cb-9d77cedc6876\") " pod="calico-apiserver/calico-apiserver-589fbcc97d-sqb5w" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643406 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbmj\" (UniqueName: \"kubernetes.io/projected/30e30d82-15c7-47b1-9012-021e8bd25177-kube-api-access-klbmj\") pod \"coredns-7c65d6cfc9-8nhmz\" (UID: \"30e30d82-15c7-47b1-9012-021e8bd25177\") " pod="kube-system/coredns-7c65d6cfc9-8nhmz" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643432 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktpv\" (UniqueName: \"kubernetes.io/projected/a784665e-e2ea-4562-8f6c-bf4d4c9ab351-kube-api-access-qktpv\") pod \"coredns-7c65d6cfc9-49v42\" (UID: \"a784665e-e2ea-4562-8f6c-bf4d4c9ab351\") " pod="kube-system/coredns-7c65d6cfc9-49v42" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643456 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fdt8l\" (UniqueName: \"kubernetes.io/projected/e54aba56-1778-47fd-a5c7-dc83617feb7d-kube-api-access-fdt8l\") pod \"whisker-7cc5458554-jzl57\" (UID: \"e54aba56-1778-47fd-a5c7-dc83617feb7d\") " pod="calico-system/whisker-7cc5458554-jzl57" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643478 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vc9w\" (UniqueName: \"kubernetes.io/projected/2c90ce12-f5c9-423c-8e01-a32bac086304-kube-api-access-5vc9w\") pod \"calico-apiserver-589fbcc97d-2zc84\" (UID: \"2c90ce12-f5c9-423c-8e01-a32bac086304\") " pod="calico-apiserver/calico-apiserver-589fbcc97d-2zc84" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643502 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6c80f539-1a56-46fa-b014-bcb6516c078a-goldmane-key-pair\") pod \"goldmane-58fd7646b9-pkkr2\" (UID: \"6c80f539-1a56-46fa-b014-bcb6516c078a\") " pod="calico-system/goldmane-58fd7646b9-pkkr2" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643528 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e54aba56-1778-47fd-a5c7-dc83617feb7d-whisker-ca-bundle\") pod \"whisker-7cc5458554-jzl57\" (UID: \"e54aba56-1778-47fd-a5c7-dc83617feb7d\") " pod="calico-system/whisker-7cc5458554-jzl57" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643550 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30e30d82-15c7-47b1-9012-021e8bd25177-config-volume\") pod \"coredns-7c65d6cfc9-8nhmz\" (UID: \"30e30d82-15c7-47b1-9012-021e8bd25177\") " pod="kube-system/coredns-7c65d6cfc9-8nhmz" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643573 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b48h\" (UniqueName: \"kubernetes.io/projected/0d0c4b64-0bfe-4382-b882-b2136806044c-kube-api-access-2b48h\") pod \"calico-kube-controllers-7987d7d768-cg7mk\" (UID: \"0d0c4b64-0bfe-4382-b882-b2136806044c\") " pod="calico-system/calico-kube-controllers-7987d7d768-cg7mk" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643597 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a784665e-e2ea-4562-8f6c-bf4d4c9ab351-config-volume\") pod \"coredns-7c65d6cfc9-49v42\" (UID: \"a784665e-e2ea-4562-8f6c-bf4d4c9ab351\") " pod="kube-system/coredns-7c65d6cfc9-49v42" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643622 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d0c4b64-0bfe-4382-b882-b2136806044c-tigera-ca-bundle\") pod \"calico-kube-controllers-7987d7d768-cg7mk\" (UID: \"0d0c4b64-0bfe-4382-b882-b2136806044c\") " pod="calico-system/calico-kube-controllers-7987d7d768-cg7mk" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643647 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2c90ce12-f5c9-423c-8e01-a32bac086304-calico-apiserver-certs\") pod \"calico-apiserver-589fbcc97d-2zc84\" (UID: 
\"2c90ce12-f5c9-423c-8e01-a32bac086304\") " pod="calico-apiserver/calico-apiserver-589fbcc97d-2zc84" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643673 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c80f539-1a56-46fa-b014-bcb6516c078a-config\") pod \"goldmane-58fd7646b9-pkkr2\" (UID: \"6c80f539-1a56-46fa-b014-bcb6516c078a\") " pod="calico-system/goldmane-58fd7646b9-pkkr2" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643694 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c80f539-1a56-46fa-b014-bcb6516c078a-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-pkkr2\" (UID: \"6c80f539-1a56-46fa-b014-bcb6516c078a\") " pod="calico-system/goldmane-58fd7646b9-pkkr2" Aug 13 00:54:42.645754 kubelet[2605]: I0813 00:54:42.643717 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnfkb\" (UniqueName: \"kubernetes.io/projected/6c80f539-1a56-46fa-b014-bcb6516c078a-kube-api-access-jnfkb\") pod \"goldmane-58fd7646b9-pkkr2\" (UID: \"6c80f539-1a56-46fa-b014-bcb6516c078a\") " pod="calico-system/goldmane-58fd7646b9-pkkr2" Aug 13 00:54:42.646489 kubelet[2605]: I0813 00:54:42.643744 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e54aba56-1778-47fd-a5c7-dc83617feb7d-whisker-backend-key-pair\") pod \"whisker-7cc5458554-jzl57\" (UID: \"e54aba56-1778-47fd-a5c7-dc83617feb7d\") " pod="calico-system/whisker-7cc5458554-jzl57" Aug 13 00:54:42.866976 env[1532]: time="2025-08-13T00:54:42.866919134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8nhmz,Uid:30e30d82-15c7-47b1-9012-021e8bd25177,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:42.871538 env[1532]: time="2025-08-13T00:54:42.871494090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cc5458554-jzl57,Uid:e54aba56-1778-47fd-a5c7-dc83617feb7d,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:42.877539 env[1532]: time="2025-08-13T00:54:42.877506763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-49v42,Uid:a784665e-e2ea-4562-8f6c-bf4d4c9ab351,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:42.883110 env[1532]: time="2025-08-13T00:54:42.883083432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589fbcc97d-sqb5w,Uid:27000197-aa78-41c4-95cb-9d77cedc6876,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:54:42.885838 env[1532]: time="2025-08-13T00:54:42.885638563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7987d7d768-cg7mk,Uid:0d0c4b64-0bfe-4382-b882-b2136806044c,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:42.888880 env[1532]: time="2025-08-13T00:54:42.888648400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589fbcc97d-2zc84,Uid:2c90ce12-f5c9-423c-8e01-a32bac086304,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:54:42.893148 env[1532]: time="2025-08-13T00:54:42.893118055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pkkr2,Uid:6c80f539-1a56-46fa-b014-bcb6516c078a,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:44.332822 env[1532]: time="2025-08-13T00:54:44.330120168Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qf7t2,Uid:5dfe102c-5690-449a-a336-a40d559d5b09,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:44.363941 env[1532]: time="2025-08-13T00:54:44.363890370Z" level=info msg="shim disconnected" id=31b9289b013459c82d8cf78c9b920cda31434068f8c0a430b7b60d366dff1941 Aug 13 00:54:44.363941 env[1532]: time="2025-08-13T00:54:44.363937570Z" level=warning msg="cleaning up after shim disconnected" id=31b9289b013459c82d8cf78c9b920cda31434068f8c0a430b7b60d366dff1941 namespace=k8s.io Aug 13 00:54:44.363941 env[1532]: time="2025-08-13T00:54:44.363948470Z" level=info msg="cleaning up dead shim" Aug 13 00:54:44.372320 env[1532]: time="2025-08-13T00:54:44.372279169Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3340 runtime=io.containerd.runc.v2\n" Aug 13 00:54:44.778093 env[1532]: time="2025-08-13T00:54:44.777935194Z" level=error msg="Failed to destroy network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.778967 env[1532]: time="2025-08-13T00:54:44.778916406Z" level=error msg="encountered an error cleaning up failed sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.779090 env[1532]: time="2025-08-13T00:54:44.778996807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cc5458554-jzl57,Uid:e54aba56-1778-47fd-a5c7-dc83617feb7d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.779422 kubelet[2605]: E0813 00:54:44.779373 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.779842 kubelet[2605]: E0813 00:54:44.779469 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cc5458554-jzl57" Aug 13 00:54:44.779842 kubelet[2605]: E0813 00:54:44.779497 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cc5458554-jzl57" Aug 13 00:54:44.779842 kubelet[2605]: E0813 00:54:44.779568 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cc5458554-jzl57_calico-system(e54aba56-1778-47fd-a5c7-dc83617feb7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cc5458554-jzl57_calico-system(e54aba56-1778-47fd-a5c7-dc83617feb7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cc5458554-jzl57" podUID="e54aba56-1778-47fd-a5c7-dc83617feb7d" Aug 13 00:54:44.804622 env[1532]: time="2025-08-13T00:54:44.804510810Z" level=error msg="Failed to destroy network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.805255 env[1532]: time="2025-08-13T00:54:44.805207318Z" level=error msg="encountered an error cleaning up failed sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.805370 env[1532]: time="2025-08-13T00:54:44.805304419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8nhmz,Uid:30e30d82-15c7-47b1-9012-021e8bd25177,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.805640 kubelet[2605]: E0813 00:54:44.805575 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.805738 kubelet[2605]: E0813 00:54:44.805715 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8nhmz" Aug 13 00:54:44.805798 kubelet[2605]: E0813 00:54:44.805759 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8nhmz" Aug 13 00:54:44.805890 kubelet[2605]: E0813 00:54:44.805819 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8nhmz_kube-system(30e30d82-15c7-47b1-9012-021e8bd25177)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8nhmz_kube-system(30e30d82-15c7-47b1-9012-021e8bd25177)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8nhmz" podUID="30e30d82-15c7-47b1-9012-021e8bd25177" Aug 13 00:54:44.833401 env[1532]: time="2025-08-13T00:54:44.833337253Z" level=error msg="Failed to destroy network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.833800 env[1532]: time="2025-08-13T00:54:44.833724857Z" level=error msg="encountered an error cleaning up failed sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.833931 env[1532]: time="2025-08-13T00:54:44.833797758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-49v42,Uid:a784665e-e2ea-4562-8f6c-bf4d4c9ab351,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.834093 kubelet[2605]: E0813 00:54:44.834053 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.834177 kubelet[2605]: E0813 00:54:44.834129 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-49v42" Aug 13 00:54:44.834177 kubelet[2605]: E0813 00:54:44.834158 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-49v42" Aug 13 00:54:44.834264 kubelet[2605]: E0813 00:54:44.834212 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-49v42_kube-system(a784665e-e2ea-4562-8f6c-bf4d4c9ab351)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-49v42_kube-system(a784665e-e2ea-4562-8f6c-bf4d4c9ab351)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-49v42" podUID="a784665e-e2ea-4562-8f6c-bf4d4c9ab351" Aug 13 00:54:44.836824 env[1532]: time="2025-08-13T00:54:44.836777694Z" level=error msg="Failed to destroy network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.837396 env[1532]: time="2025-08-13T00:54:44.837341100Z" level=error msg="encountered an error cleaning up failed sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.837592 env[1532]: time="2025-08-13T00:54:44.837545203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7987d7d768-cg7mk,Uid:0d0c4b64-0bfe-4382-b882-b2136806044c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.838034 kubelet[2605]: E0813 00:54:44.837999 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.838142 kubelet[2605]: E0813 00:54:44.838062 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7987d7d768-cg7mk" Aug 13 00:54:44.838142 kubelet[2605]: E0813 00:54:44.838088 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7987d7d768-cg7mk" Aug 13 00:54:44.838799 kubelet[2605]: E0813 00:54:44.838136 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7987d7d768-cg7mk_calico-system(0d0c4b64-0bfe-4382-b882-b2136806044c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7987d7d768-cg7mk_calico-system(0d0c4b64-0bfe-4382-b882-b2136806044c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7987d7d768-cg7mk" podUID="0d0c4b64-0bfe-4382-b882-b2136806044c" Aug 13 00:54:44.878877 env[1532]: time="2025-08-13T00:54:44.878794293Z" level=error msg="Failed to destroy network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.879319 env[1532]: time="2025-08-13T00:54:44.879264699Z" level=error msg="encountered an error cleaning up failed sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.879438 env[1532]: time="2025-08-13T00:54:44.879344900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589fbcc97d-sqb5w,Uid:27000197-aa78-41c4-95cb-9d77cedc6876,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.879604 kubelet[2605]: E0813 00:54:44.879565 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.879695 kubelet[2605]: E0813 00:54:44.879637 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589fbcc97d-sqb5w" Aug 13 00:54:44.879695 kubelet[2605]: E0813 00:54:44.879664 2605 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589fbcc97d-sqb5w" Aug 13 00:54:44.879796 kubelet[2605]: E0813 00:54:44.879718 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589fbcc97d-sqb5w_calico-apiserver(27000197-aa78-41c4-95cb-9d77cedc6876)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589fbcc97d-sqb5w_calico-apiserver(27000197-aa78-41c4-95cb-9d77cedc6876)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589fbcc97d-sqb5w" podUID="27000197-aa78-41c4-95cb-9d77cedc6876" Aug 13 00:54:44.904520 env[1532]: time="2025-08-13T00:54:44.904460599Z" level=error msg="Failed to destroy network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.904848 env[1532]: time="2025-08-13T00:54:44.904808303Z" level=error msg="encountered an error cleaning up failed sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.904981 env[1532]: time="2025-08-13T00:54:44.904886004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589fbcc97d-2zc84,Uid:2c90ce12-f5c9-423c-8e01-a32bac086304,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.905126 kubelet[2605]: E0813 00:54:44.905088 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.905252 kubelet[2605]: E0813 00:54:44.905155 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-589fbcc97d-2zc84" Aug 13 00:54:44.905252 kubelet[2605]: E0813 00:54:44.905182 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589fbcc97d-2zc84" Aug 13 00:54:44.905252 kubelet[2605]: E0813 00:54:44.905230 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589fbcc97d-2zc84_calico-apiserver(2c90ce12-f5c9-423c-8e01-a32bac086304)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589fbcc97d-2zc84_calico-apiserver(2c90ce12-f5c9-423c-8e01-a32bac086304)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589fbcc97d-2zc84" podUID="2c90ce12-f5c9-423c-8e01-a32bac086304" Aug 13 00:54:44.916450 env[1532]: time="2025-08-13T00:54:44.916390741Z" level=error msg="Failed to destroy network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.917094 env[1532]: time="2025-08-13T00:54:44.917046648Z" level=error msg="encountered an error cleaning up failed sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.917288 env[1532]: time="2025-08-13T00:54:44.917243151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qf7t2,Uid:5dfe102c-5690-449a-a336-a40d559d5b09,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.917878 kubelet[2605]: E0813 00:54:44.917561 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.917878 kubelet[2605]: E0813 00:54:44.917624 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qf7t2" Aug 13 00:54:44.917878 kubelet[2605]: E0813 00:54:44.917653 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qf7t2" Aug 13 00:54:44.917878 kubelet[2605]: E0813 00:54:44.917700 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qf7t2_calico-system(5dfe102c-5690-449a-a336-a40d559d5b09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qf7t2_calico-system(5dfe102c-5690-449a-a336-a40d559d5b09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qf7t2" podUID="5dfe102c-5690-449a-a336-a40d559d5b09" Aug 13 00:54:44.922326 env[1532]: time="2025-08-13T00:54:44.922265810Z" level=error msg="Failed to destroy network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.922703 env[1532]: time="2025-08-13T00:54:44.922661315Z" level=error msg="encountered an error cleaning up failed sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.922784 env[1532]: time="2025-08-13T00:54:44.922714116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pkkr2,Uid:6c80f539-1a56-46fa-b014-bcb6516c078a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.922961 kubelet[2605]: E0813 00:54:44.922919 2605 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:44.923053 kubelet[2605]: E0813 00:54:44.922982 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-pkkr2" Aug 13 00:54:44.923053 kubelet[2605]: E0813 00:54:44.923007 2605 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-pkkr2" Aug 13 00:54:44.923144 kubelet[2605]: E0813 00:54:44.923054 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-pkkr2_calico-system(6c80f539-1a56-46fa-b014-bcb6516c078a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-pkkr2_calico-system(6c80f539-1a56-46fa-b014-bcb6516c078a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-pkkr2" podUID="6c80f539-1a56-46fa-b014-bcb6516c078a" Aug 13 00:54:45.362386 kubelet[2605]: I0813 00:54:45.362349 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:54:45.363460 env[1532]: time="2025-08-13T00:54:45.363395594Z" level=info msg="StopPodSandbox for \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\"" Aug 13 00:54:45.366708 kubelet[2605]: I0813 00:54:45.366144 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:54:45.367172 env[1532]: time="2025-08-13T00:54:45.367119337Z" level=info msg="StopPodSandbox for \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\"" Aug 13 00:54:45.369333 kubelet[2605]: I0813 00:54:45.369233 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:45.369844 env[1532]: time="2025-08-13T00:54:45.369813069Z" level=info msg="StopPodSandbox for \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\"" Aug 13 00:54:45.371212 kubelet[2605]: I0813 00:54:45.371190 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:54:45.372048 env[1532]: time="2025-08-13T00:54:45.372017495Z" level=info msg="StopPodSandbox for \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\"" Aug 13 00:54:45.374901 kubelet[2605]: I0813 00:54:45.374826 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:54:45.375477 env[1532]: time="2025-08-13T00:54:45.375428735Z" level=info msg="StopPodSandbox for \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\"" Aug 13 00:54:45.380569 env[1532]: time="2025-08-13T00:54:45.380538495Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:54:45.396942 kubelet[2605]: I0813 00:54:45.396445 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:54:45.397271 env[1532]: time="2025-08-13T00:54:45.397240190Z" level=info msg="StopPodSandbox for \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\"" Aug 13 00:54:45.411298 kubelet[2605]: I0813 00:54:45.410973 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:54:45.414199 env[1532]: time="2025-08-13T00:54:45.414163989Z" level=info msg="StopPodSandbox for \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\"" Aug 13 00:54:45.415482 kubelet[2605]: I0813 00:54:45.415457 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:54:45.416285 env[1532]: time="2025-08-13T00:54:45.416240413Z" level=info msg="StopPodSandbox for \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\"" Aug 13 00:54:45.438797 env[1532]: time="2025-08-13T00:54:45.438734777Z" level=error msg="StopPodSandbox for \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\" failed" error="failed to destroy network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:45.439524 kubelet[2605]: E0813 00:54:45.439162 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:54:45.439524 kubelet[2605]: E0813 00:54:45.439295 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3"} Aug 13 00:54:45.439524 kubelet[2605]: E0813 00:54:45.439400 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5dfe102c-5690-449a-a336-a40d559d5b09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:54:45.439524 kubelet[2605]: E0813 00:54:45.439452 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5dfe102c-5690-449a-a336-a40d559d5b09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-qf7t2" podUID="5dfe102c-5690-449a-a336-a40d559d5b09" Aug 13 00:54:45.478956 env[1532]: time="2025-08-13T00:54:45.478890647Z" level=error msg="StopPodSandbox for \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\" failed" error="failed to destroy network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:45.479589 kubelet[2605]: E0813 00:54:45.479539 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:45.479734 kubelet[2605]: E0813 00:54:45.479604 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831"} Aug 13 00:54:45.479734 kubelet[2605]: E0813 00:54:45.479647 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d0c4b64-0bfe-4382-b882-b2136806044c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:54:45.479734 kubelet[2605]: E0813 00:54:45.479676 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d0c4b64-0bfe-4382-b882-b2136806044c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7987d7d768-cg7mk" podUID="0d0c4b64-0bfe-4382-b882-b2136806044c" Aug 13 00:54:45.508687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e-shm.mount: Deactivated successfully. Aug 13 00:54:45.508904 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a-shm.mount: Deactivated successfully. Aug 13 00:54:45.509038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247-shm.mount: Deactivated successfully. 
Aug 13 00:54:45.518082 env[1532]: time="2025-08-13T00:54:45.518025406Z" level=error msg="StopPodSandbox for \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\" failed" error="failed to destroy network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:45.518507 kubelet[2605]: E0813 00:54:45.518293 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:54:45.518507 kubelet[2605]: E0813 00:54:45.518372 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02"} Aug 13 00:54:45.518507 kubelet[2605]: E0813 00:54:45.518431 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c80f539-1a56-46fa-b014-bcb6516c078a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:54:45.518507 kubelet[2605]: E0813 00:54:45.518461 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c80f539-1a56-46fa-b014-bcb6516c078a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-pkkr2" podUID="6c80f539-1a56-46fa-b014-bcb6516c078a" Aug 13 00:54:45.533069 env[1532]: time="2025-08-13T00:54:45.532993281Z" level=error msg="StopPodSandbox for \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\" failed" error="failed to destroy network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:45.533679 kubelet[2605]: E0813 00:54:45.533627 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:54:45.533823 kubelet[2605]: E0813 00:54:45.533692 2605 kuberuntime_manager.go:1479] "Failed to 
stop sandbox" podSandboxID={"Type":"containerd","ID":"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a"} Aug 13 00:54:45.533823 kubelet[2605]: E0813 00:54:45.533750 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30e30d82-15c7-47b1-9012-021e8bd25177\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:54:45.533823 kubelet[2605]: E0813 00:54:45.533783 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30e30d82-15c7-47b1-9012-021e8bd25177\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8nhmz" podUID="30e30d82-15c7-47b1-9012-021e8bd25177" Aug 13 00:54:45.567873 env[1532]: time="2025-08-13T00:54:45.567783389Z" level=error msg="StopPodSandbox for \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\" failed" error="failed to destroy network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:45.568160 kubelet[2605]: E0813 00:54:45.568105 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:54:45.568274 kubelet[2605]: E0813 00:54:45.568177 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e"} Aug 13 00:54:45.568274 kubelet[2605]: E0813 00:54:45.568227 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a784665e-e2ea-4562-8f6c-bf4d4c9ab351\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:54:45.568274 kubelet[2605]: E0813 00:54:45.568259 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a784665e-e2ea-4562-8f6c-bf4d4c9ab351\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-49v42" podUID="a784665e-e2ea-4562-8f6c-bf4d4c9ab351" Aug 13 00:54:45.577972 env[1532]: time="2025-08-13T00:54:45.577911307Z" level=error msg="StopPodSandbox for \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\" failed" error="failed to destroy network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:45.578552 kubelet[2605]: E0813 00:54:45.578368 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:54:45.578552 kubelet[2605]: E0813 00:54:45.578437 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47"} Aug 13 00:54:45.578552 kubelet[2605]: E0813 00:54:45.578480 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c90ce12-f5c9-423c-8e01-a32bac086304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:54:45.578552 kubelet[2605]: E0813 00:54:45.578520 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c90ce12-f5c9-423c-8e01-a32bac086304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589fbcc97d-2zc84" podUID="2c90ce12-f5c9-423c-8e01-a32bac086304" Aug 13 00:54:45.588436 env[1532]: time="2025-08-13T00:54:45.588370730Z" level=error msg="StopPodSandbox for \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\" failed" error="failed to destroy network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:45.589030 kubelet[2605]: E0813 00:54:45.588986 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:54:45.589147 kubelet[2605]: E0813 00:54:45.589063 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247"} Aug 13 00:54:45.589147 kubelet[2605]: E0813 00:54:45.589121 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e54aba56-1778-47fd-a5c7-dc83617feb7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:54:45.589268 kubelet[2605]: E0813 00:54:45.589158 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e54aba56-1778-47fd-a5c7-dc83617feb7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cc5458554-jzl57" podUID="e54aba56-1778-47fd-a5c7-dc83617feb7d" Aug 13 00:54:45.590004 env[1532]: time="2025-08-13T00:54:45.589956349Z" level=error msg="StopPodSandbox for \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\" failed" error="failed to destroy network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:54:45.590235 kubelet[2605]: E0813 00:54:45.590196 2605 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:54:45.590325 kubelet[2605]: E0813 00:54:45.590246 2605 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211"} Aug 13 00:54:45.590325 kubelet[2605]: E0813 00:54:45.590286 2605 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27000197-aa78-41c4-95cb-9d77cedc6876\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:54:45.590325 kubelet[2605]: E0813 00:54:45.590313 2605 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"27000197-aa78-41c4-95cb-9d77cedc6876\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589fbcc97d-sqb5w" podUID="27000197-aa78-41c4-95cb-9d77cedc6876" Aug 13 00:54:51.975514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345707306.mount: Deactivated successfully. Aug 13 00:54:52.012826 env[1532]: time="2025-08-13T00:54:52.012773808Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:52.017842 env[1532]: time="2025-08-13T00:54:52.017799061Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:52.021672 env[1532]: time="2025-08-13T00:54:52.021642202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:52.026901 env[1532]: time="2025-08-13T00:54:52.026851857Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:52.027248 env[1532]: time="2025-08-13T00:54:52.027216061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:54:52.046672 env[1532]: time="2025-08-13T00:54:52.046636868Z" level=info msg="CreateContainer within sandbox \"be3b50b665a4ae18f830b7c9d14d4f304da34dbf1e8b60314c133adc7b9822ee\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:54:52.096377 env[1532]: time="2025-08-13T00:54:52.096318996Z" level=info msg="CreateContainer within sandbox \"be3b50b665a4ae18f830b7c9d14d4f304da34dbf1e8b60314c133adc7b9822ee\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"994264523b6dc9d632b7e5ed1d5fc06d05c047e11318ed3b142dfb8736f5bc0e\"" Aug 13 00:54:52.097258 env[1532]: time="2025-08-13T00:54:52.097223706Z" level=info msg="StartContainer for \"994264523b6dc9d632b7e5ed1d5fc06d05c047e11318ed3b142dfb8736f5bc0e\"" Aug 13 00:54:52.165106 env[1532]: time="2025-08-13T00:54:52.165048527Z" level=info msg="StartContainer for \"994264523b6dc9d632b7e5ed1d5fc06d05c047e11318ed3b142dfb8736f5bc0e\" returns successfully" Aug 13 00:54:52.457296 kubelet[2605]: I0813 00:54:52.457233 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rbfw4" podStartSLOduration=1.002923347 podStartE2EDuration="20.457214634s" podCreationTimestamp="2025-08-13 00:54:32 +0000 UTC" firstStartedPulling="2025-08-13 00:54:32.573840584 +0000 UTC m=+33.476201563" lastFinishedPulling="2025-08-13 00:54:52.028131871 +0000 UTC m=+52.930492850" observedRunningTime="2025-08-13 00:54:52.456809329 +0000 UTC m=+53.359170408" watchObservedRunningTime="2025-08-13 00:54:52.457214634 +0000 UTC m=+53.359575713" Aug 13 00:54:52.637803 kernel: wireguard: 
WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:54:52.637979 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 00:54:52.766131 env[1532]: time="2025-08-13T00:54:52.766004417Z" level=info msg="StopPodSandbox for \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\"" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.861 [INFO][3780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.861 [INFO][3780] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" iface="eth0" netns="/var/run/netns/cni-5c75124c-0a1b-e656-5432-c221b51260d1" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.861 [INFO][3780] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" iface="eth0" netns="/var/run/netns/cni-5c75124c-0a1b-e656-5432-c221b51260d1" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.862 [INFO][3780] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" iface="eth0" netns="/var/run/netns/cni-5c75124c-0a1b-e656-5432-c221b51260d1" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.862 [INFO][3780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.862 [INFO][3780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.908 [INFO][3791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.909 [INFO][3791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.909 [INFO][3791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.915 [WARNING][3791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.915 [INFO][3791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.917 [INFO][3791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:52.922619 env[1532]: 2025-08-13 00:54:52.921 [INFO][3780] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:54:52.923464 env[1532]: time="2025-08-13T00:54:52.922760384Z" level=info msg="TearDown network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\" successfully" Aug 13 00:54:52.923464 env[1532]: time="2025-08-13T00:54:52.922801784Z" level=info msg="StopPodSandbox for \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\" returns successfully" Aug 13 00:54:52.972491 systemd[1]: run-netns-cni\x2d5c75124c\x2d0a1b\x2de656\x2d5432\x2dc221b51260d1.mount: Deactivated successfully. Aug 13 00:54:53.022148 kubelet[2605]: I0813 00:54:53.020716 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdt8l\" (UniqueName: \"kubernetes.io/projected/e54aba56-1778-47fd-a5c7-dc83617feb7d-kube-api-access-fdt8l\") pod \"e54aba56-1778-47fd-a5c7-dc83617feb7d\" (UID: \"e54aba56-1778-47fd-a5c7-dc83617feb7d\") " Aug 13 00:54:53.022148 kubelet[2605]: I0813 00:54:53.020987 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e54aba56-1778-47fd-a5c7-dc83617feb7d-whisker-backend-key-pair\") pod \"e54aba56-1778-47fd-a5c7-dc83617feb7d\" (UID: \"e54aba56-1778-47fd-a5c7-dc83617feb7d\") " Aug 13 00:54:53.022148 kubelet[2605]: I0813 00:54:53.021051 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e54aba56-1778-47fd-a5c7-dc83617feb7d-whisker-ca-bundle\") pod \"e54aba56-1778-47fd-a5c7-dc83617feb7d\" (UID: \"e54aba56-1778-47fd-a5c7-dc83617feb7d\") " Aug 13 00:54:53.022148 kubelet[2605]: I0813 00:54:53.021537 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54aba56-1778-47fd-a5c7-dc83617feb7d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e54aba56-1778-47fd-a5c7-dc83617feb7d" (UID: "e54aba56-1778-47fd-a5c7-dc83617feb7d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:54:53.028569 systemd[1]: var-lib-kubelet-pods-e54aba56\x2d1778\x2d47fd\x2da5c7\x2ddc83617feb7d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:54:53.033778 kubelet[2605]: I0813 00:54:53.033033 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54aba56-1778-47fd-a5c7-dc83617feb7d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e54aba56-1778-47fd-a5c7-dc83617feb7d" (UID: "e54aba56-1778-47fd-a5c7-dc83617feb7d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:54:53.035255 systemd[1]: var-lib-kubelet-pods-e54aba56\x2d1778\x2d47fd\x2da5c7\x2ddc83617feb7d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfdt8l.mount: Deactivated successfully. Aug 13 00:54:53.036657 kubelet[2605]: I0813 00:54:53.036617 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54aba56-1778-47fd-a5c7-dc83617feb7d-kube-api-access-fdt8l" (OuterVolumeSpecName: "kube-api-access-fdt8l") pod "e54aba56-1778-47fd-a5c7-dc83617feb7d" (UID: "e54aba56-1778-47fd-a5c7-dc83617feb7d"). InnerVolumeSpecName "kube-api-access-fdt8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:54:53.121800 kubelet[2605]: I0813 00:54:53.121737 2605 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e54aba56-1778-47fd-a5c7-dc83617feb7d-whisker-backend-key-pair\") on node \"ci-3510.3.8-a-1859c445b4\" DevicePath \"\"" Aug 13 00:54:53.121800 kubelet[2605]: I0813 00:54:53.121787 2605 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdt8l\" (UniqueName: \"kubernetes.io/projected/e54aba56-1778-47fd-a5c7-dc83617feb7d-kube-api-access-fdt8l\") on node \"ci-3510.3.8-a-1859c445b4\" DevicePath \"\"" Aug 13 00:54:53.121800 kubelet[2605]: I0813 00:54:53.121804 2605 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e54aba56-1778-47fd-a5c7-dc83617feb7d-whisker-ca-bundle\") on node \"ci-3510.3.8-a-1859c445b4\" DevicePath \"\"" Aug 13 00:54:53.625103 kubelet[2605]: I0813 00:54:53.625057 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f131020-c3bc-40ec-b0c1-a43fefac6f5f-whisker-ca-bundle\") pod \"whisker-6cf566484b-n65qf\" (UID: \"0f131020-c3bc-40ec-b0c1-a43fefac6f5f\") " pod="calico-system/whisker-6cf566484b-n65qf" Aug 13 00:54:53.625852 kubelet[2605]: I0813 00:54:53.625735 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf2jq\" (UniqueName: \"kubernetes.io/projected/0f131020-c3bc-40ec-b0c1-a43fefac6f5f-kube-api-access-tf2jq\") pod \"whisker-6cf566484b-n65qf\" (UID: \"0f131020-c3bc-40ec-b0c1-a43fefac6f5f\") " pod="calico-system/whisker-6cf566484b-n65qf" Aug 13 00:54:53.626098 kubelet[2605]: I0813 00:54:53.626080 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f131020-c3bc-40ec-b0c1-a43fefac6f5f-whisker-backend-key-pair\") pod \"whisker-6cf566484b-n65qf\" (UID: \"0f131020-c3bc-40ec-b0c1-a43fefac6f5f\") " pod="calico-system/whisker-6cf566484b-n65qf" Aug 13 00:54:53.841795 env[1532]: time="2025-08-13T00:54:53.841741642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf566484b-n65qf,Uid:0f131020-c3bc-40ec-b0c1-a43fefac6f5f,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:53.978012 systemd[1]: run-containerd-runc-k8s.io-994264523b6dc9d632b7e5ed1d5fc06d05c047e11318ed3b142dfb8736f5bc0e-runc.NdPJiG.mount: Deactivated successfully. 
Aug 13 00:54:54.034886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali802a18ecd91: link becomes ready Aug 13 00:54:54.037035 systemd-networkd[1717]: cali802a18ecd91: Link UP Aug 13 00:54:54.037267 systemd-networkd[1717]: cali802a18ecd91: Gained carrier Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.897 [INFO][3833] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.907 [INFO][3833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0 whisker-6cf566484b- calico-system 0f131020-c3bc-40ec-b0c1-a43fefac6f5f 906 0 2025-08-13 00:54:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cf566484b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-a-1859c445b4 whisker-6cf566484b-n65qf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali802a18ecd91 [] [] }} ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Namespace="calico-system" Pod="whisker-6cf566484b-n65qf" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.908 [INFO][3833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Namespace="calico-system" Pod="whisker-6cf566484b-n65qf" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.940 [INFO][3846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" HandleID="k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.940 [INFO][3846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" HandleID="k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-a-1859c445b4", "pod":"whisker-6cf566484b-n65qf", "timestamp":"2025-08-13 00:54:53.940033974 +0000 UTC"}, Hostname:"ci-3510.3.8-a-1859c445b4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.940 [INFO][3846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.940 [INFO][3846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.940 [INFO][3846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-1859c445b4' Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.946 [INFO][3846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.953 [INFO][3846] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.957 [INFO][3846] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.958 [INFO][3846] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.960 [INFO][3846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.960 [INFO][3846] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.962 [INFO][3846] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7 Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.967 [INFO][3846] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.982 [INFO][3846] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.65/26] block=192.168.84.64/26 handle="k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.982 [INFO][3846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.65/26] handle="k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.982 [INFO][3846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:54:54.056406 env[1532]: 2025-08-13 00:54:53.982 [INFO][3846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.65/26] IPv6=[] ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" HandleID="k8s-pod-network.fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" Aug 13 00:54:54.057440 env[1532]: 2025-08-13 00:54:53.984 [INFO][3833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Namespace="calico-system" Pod="whisker-6cf566484b-n65qf" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0", GenerateName:"whisker-6cf566484b-", Namespace:"calico-system", SelfLink:"", UID:"0f131020-c3bc-40ec-b0c1-a43fefac6f5f", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cf566484b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"", Pod:"whisker-6cf566484b-n65qf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.84.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali802a18ecd91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:54.057440 env[1532]: 2025-08-13 00:54:53.984 [INFO][3833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.65/32] ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Namespace="calico-system" Pod="whisker-6cf566484b-n65qf" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" Aug 13 00:54:54.057440 env[1532]: 2025-08-13 00:54:53.984 [INFO][3833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali802a18ecd91 ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Namespace="calico-system" Pod="whisker-6cf566484b-n65qf" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" Aug 13 00:54:54.057440 env[1532]: 2025-08-13 00:54:54.028 [INFO][3833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Namespace="calico-system" Pod="whisker-6cf566484b-n65qf" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" Aug 13 00:54:54.057440 env[1532]: 2025-08-13 00:54:54.034 [INFO][3833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Namespace="calico-system" Pod="whisker-6cf566484b-n65qf" 
WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0", GenerateName:"whisker-6cf566484b-", Namespace:"calico-system", SelfLink:"", UID:"0f131020-c3bc-40ec-b0c1-a43fefac6f5f", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cf566484b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7", Pod:"whisker-6cf566484b-n65qf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.84.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali802a18ecd91", MAC:"66:6d:24:58:9f:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:54.057440 env[1532]: 2025-08-13 00:54:54.053 [INFO][3833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7" Namespace="calico-system" Pod="whisker-6cf566484b-n65qf" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--6cf566484b--n65qf-eth0" Aug 13 00:54:54.090090 env[1532]: time="2025-08-13T00:54:54.090007836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:54.090366 env[1532]: time="2025-08-13T00:54:54.090318739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:54.090506 env[1532]: time="2025-08-13T00:54:54.090479941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:54.097821 env[1532]: time="2025-08-13T00:54:54.091954056Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7 pid=3896 runtime=io.containerd.runc.v2 Aug 13 00:54:54.147811 systemd[1]: run-containerd-runc-k8s.io-fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7-runc.weC37f.mount: Deactivated successfully. 
Aug 13 00:54:54.172000 audit[3932]: AVC avc: denied { write } for pid=3932 comm="tee" name="fd" dev="proc" ino=30890 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.186955 kernel: audit: type=1400 audit(1755046494.172:295): avc: denied { write } for pid=3932 comm="tee" name="fd" dev="proc" ino=30890 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.172000 audit[3932]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf84077cb a2=241 a3=1b6 items=1 ppid=3872 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.208880 kernel: audit: type=1300 audit(1755046494.172:295): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf84077cb a2=241 a3=1b6 items=1 ppid=3872 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.172000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 13 00:54:54.214885 kernel: audit: type=1307 audit(1755046494.172:295): cwd="/etc/service/enabled/confd/log" Aug 13 00:54:54.172000 audit: PATH item=0 name="/dev/fd/63" inode=30871 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.228030 kernel: audit: type=1302 audit(1755046494.172:295): item=0 name="/dev/fd/63" inode=30871 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.241920 kernel: audit: type=1327 audit(1755046494.172:295): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.187000 audit[3937]: AVC avc: denied { write } for pid=3937 comm="tee" name="fd" dev="proc" ino=30909 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.255942 kernel: audit: type=1400 audit(1755046494.187:296): avc: denied { write } for pid=3937 comm="tee" name="fd" dev="proc" ino=30909 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.187000 audit[3937]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc343967bb a2=241 a3=1b6 items=1 ppid=3875 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.187000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 00:54:54.284254 kernel: audit: type=1300 audit(1755046494.187:296): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc343967bb a2=241 a3=1b6 items=1 ppid=3875 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
00:54:54.284362 kernel: audit: type=1307 audit(1755046494.187:296): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 00:54:54.284388 kernel: audit: type=1302 audit(1755046494.187:296): item=0 name="/dev/fd/63" inode=30880 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.187000 audit: PATH item=0 name="/dev/fd/63" inode=30880 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.197000 audit[3941]: AVC avc: denied { write } for pid=3941 comm="tee" name="fd" dev="proc" ino=30920 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.197000 audit[3941]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe56a8f7cc a2=241 a3=1b6 items=1 ppid=3871 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.197000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 13 00:54:54.197000 audit: PATH item=0 name="/dev/fd/63" inode=31877 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.309874 kernel: audit: type=1327 audit(1755046494.187:296): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.199000 audit[3953]: AVC avc: denied { write } for pid=3953 comm="tee" name="fd" dev="proc" ino=30924 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.199000 audit[3953]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe63ac7cd a2=241 a3=1b6 items=1 ppid=3879 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.199000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 13 00:54:54.199000 audit: PATH item=0 name="/dev/fd/63" inode=30904 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.204000 audit[3963]: AVC avc: denied { write } for pid=3963 comm="tee" name="fd" dev="proc" ino=30929 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.204000 audit[3963]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce781a7cb a2=241 a3=1b6 items=1 ppid=3876 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.204000 audit: CWD cwd="/etc/service/enabled/felix/log" Aug 13 00:54:54.204000 audit: PATH item=0 name="/dev/fd/63" inode=30913 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.206000 audit[3961]: AVC avc: denied { write } for pid=3961 comm="tee" name="fd" dev="proc" ino=30933 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.206000 audit[3961]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe8924a7bc a2=241 a3=1b6 items=1 ppid=3868 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.206000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 13 00:54:54.206000 audit: PATH item=0 name="/dev/fd/63" inode=30908 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.206000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.215000 audit[3965]: AVC avc: denied { write } for pid=3965 comm="tee" name="fd" dev="proc" ino=30938 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:54:54.215000 audit[3965]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffbde0b7cb a2=241 a3=1b6 items=1 ppid=3867 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.215000 audit: CWD cwd="/etc/service/enabled/bird6/log" Aug 13 00:54:54.215000 audit: PATH item=0 name="/dev/fd/63" inode=30916 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:54.215000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:54:54.345240 env[1532]: time="2025-08-13T00:54:54.345182381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf566484b-n65qf,Uid:0f131020-c3bc-40ec-b0c1-a43fefac6f5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7\"" Aug 13 00:54:54.347245 env[1532]: time="2025-08-13T00:54:54.347209602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit: BPF prog-id=10 op=LOAD Aug 13 00:54:54.603000 audit[4019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc066fc3b0 a2=98 a3=1fffffffffffffff items=0 ppid=3878 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.603000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:54:54.603000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit: BPF prog-id=11 op=LOAD Aug 13 00:54:54.603000 audit[4019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc066fc290 a2=94 a3=3 items=0 ppid=3878 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.603000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:54:54.603000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { bpf } for pid=4019 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit: BPF prog-id=12 op=LOAD Aug 13 00:54:54.603000 audit[4019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc066fc2d0 a2=94 a3=7ffc066fc4b0 items=0 ppid=3878 pid=4019 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.603000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:54:54.603000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:54:54.603000 audit[4019]: AVC avc: denied { perfmon } for pid=4019 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.603000 audit[4019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc066fc3a0 a2=50 a3=a000000085 items=0 ppid=3878 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.603000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit: BPF prog-id=13 op=LOAD Aug 13 00:54:54.604000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff71c8e3a0 a2=98 a3=3 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.604000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.604000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit: BPF prog-id=14 op=LOAD Aug 13 00:54:54.604000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff71c8e190 a2=94 a3=54428f items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.604000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.604000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.604000 audit: BPF prog-id=15 op=LOAD Aug 13 00:54:54.604000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff71c8e1c0 a2=94 a3=2 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.604000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.605000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit: BPF prog-id=16 op=LOAD Aug 13 00:54:54.732000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff71c8e080 a2=94 a3=1 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.732000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.732000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:54:54.732000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.732000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff71c8e150 a2=50 a3=7fff71c8e230 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.732000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff71c8e090 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff71c8e0c0 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff71c8dfd0 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff71c8e0e0 a2=28 a3=0 
items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff71c8e0c0 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff71c8e0b0 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff71c8e0e0 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff71c8e0c0 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff71c8e0e0 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: 
denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff71c8e0b0 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff71c8e120 a2=28 a3=0 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff71c8ded0 a2=50 a3=1 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit: BPF prog-id=17 op=LOAD Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff71c8ded0 a2=94 a3=5 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff71c8df80 a2=50 a3=1 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff71c8e0a0 a2=4 a3=38 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.742000 audit[4020]: AVC avc: denied { confidentiality } for pid=4020 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:54:54.742000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff71c8e0f0 a2=94 a3=6 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:54:54.743000 audit[4020]: AVC avc: denied { confidentiality } for pid=4020 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:54:54.743000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff71c8d8a0 a2=94 a3=88 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.743000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.743000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff71c8d8a0 a2=94 a3=88 items=0 ppid=3878 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.743000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit: BPF prog-id=18 op=LOAD Aug 13 00:54:54.753000 audit[4023]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc24964f70 a2=98 a3=1999999999999999 items=0 ppid=3878 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.753000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:54:54.753000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit: BPF prog-id=19 op=LOAD Aug 13 00:54:54.753000 audit[4023]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc24964e50 a2=94 a3=ffff items=0 ppid=3878 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.753000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:54:54.753000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:54.753000 audit: BPF prog-id=20 op=LOAD Aug 13 00:54:54.753000 audit[4023]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc24964e90 a2=94 a3=7ffc24965070 items=0 ppid=3878 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:54.753000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:54:54.753000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:54:54.990804 systemd-networkd[1717]: vxlan.calico: Link UP Aug 13 00:54:54.990816 systemd-networkd[1717]: vxlan.calico: Gained carrier Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit: BPF prog-id=21 op=LOAD Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdea801130 a2=98 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: 
AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit: BPF prog-id=22 op=LOAD Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdea800f40 a2=94 a3=54428f items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit: BPF prog-id=23 op=LOAD Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdea800f70 a2=94 a3=2 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdea800e40 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdea800e70 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdea800d80 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdea800e90 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdea800e70 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdea800e60 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdea800e90 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdea800e70 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdea800e90 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.013000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.013000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdea800e60 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdea800ed0 a2=28 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:54:55.014000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit: BPF prog-id=24 op=LOAD Aug 13 00:54:55.014000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdea800d40 a2=94 a3=0 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.014000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:54:55.014000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.014000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffdea800d30 a2=50 a3=2800 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffdea800d30 a2=50 a3=2800 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
00:54:55.015000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit: BPF prog-id=25 op=LOAD Aug 13 00:54:55.015000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdea800550 a2=94 a3=2 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.015000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.015000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC 
avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { perfmon } for pid=4048 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit[4048]: AVC avc: denied { bpf } for pid=4048 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.015000 audit: BPF prog-id=26 op=LOAD Aug 13 00:54:55.015000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdea800650 a2=94 a3=30 items=0 ppid=3878 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.015000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit: BPF prog-id=27 op=LOAD Aug 13 00:54:55.018000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea4608bc0 a2=98 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.018000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.018000 audit: BPF prog-id=27 op=UNLOAD Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.018000 audit: BPF prog-id=28 op=LOAD Aug 13 00:54:55.018000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea46089b0 a2=94 a3=54428f items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.018000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.019000 audit: BPF prog-id=28 op=UNLOAD Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.019000 audit: BPF prog-id=29 op=LOAD Aug 13 00:54:55.019000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea46089e0 a2=94 a3=2 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.019000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.019000 audit: BPF prog-id=29 op=UNLOAD Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit: BPF prog-id=30 op=LOAD Aug 13 00:54:55.156000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea46088a0 a2=94 a3=1 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.156000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.156000 audit: BPF prog-id=30 op=UNLOAD Aug 13 00:54:55.156000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.156000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffea4608970 a2=50 a3=7ffea4608a50 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.156000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea46088b0 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea46088e0 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea46087f0 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea4608900 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea46088e0 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7ffea46088d0 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea4608900 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea46088e0 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea4608900 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea46088d0 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea4608940 a2=28 a3=0 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffea46086f0 a2=50 a3=1 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.166000 audit: BPF prog-id=31 op=LOAD Aug 13 00:54:55.166000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffea46086f0 a2=94 a3=5 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.167000 audit: BPF prog-id=31 op=UNLOAD Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffea46087a0 a2=50 a3=1 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.167000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffea46088c0 a2=4 a3=38 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.167000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { confidentiality } for pid=4051 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:54:55.167000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea4608910 a2=94 a3=6 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.167000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { confidentiality } for pid=4051 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:54:55.167000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea46080c0 a2=94 a3=88 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.167000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { perfmon } for pid=4051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.167000 audit[4051]: AVC avc: denied { confidentiality } for pid=4051 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 
13 00:54:55.167000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea46080c0 a2=94 a3=88 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.167000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.168000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.168000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffea4609af0 a2=10 a3=f8f00800 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.168000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.168000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.168000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffea4609990 a2=10 a3=3 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.168000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.168000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.168000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffea4609930 a2=10 a3=3 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.168000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.168000 audit[4051]: AVC avc: denied { bpf } for pid=4051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:55.168000 audit[4051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffea4609930 a2=10 a3=7 items=0 ppid=3878 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.168000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:54:55.173000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:54:55.200930 kubelet[2605]: I0813 00:54:55.200887 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54aba56-1778-47fd-a5c7-dc83617feb7d" path="/var/lib/kubelet/pods/e54aba56-1778-47fd-a5c7-dc83617feb7d/volumes" Aug 13 00:54:55.372000 audit[4079]: NETFILTER_CFG table=mangle:104 family=2 entries=16 op=nft_register_chain pid=4079 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:54:55.372000 audit[4079]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffab446260 a2=0 a3=7fffab44624c items=0 ppid=3878 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.372000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:54:55.420000 audit[4080]: NETFILTER_CFG table=nat:105 family=2 entries=15 op=nft_register_chain pid=4080 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:54:55.420000 audit[4080]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdee199240 a2=0 a3=7ffdee19922c items=0 ppid=3878 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.420000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:54:55.447000 audit[4078]: NETFILTER_CFG table=raw:106 family=2 entries=21 op=nft_register_chain pid=4078 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:54:55.447000 audit[4078]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc55afd6a0 a2=0 a3=7ffc55afd68c items=0 ppid=3878 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.447000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:54:55.463000 audit[4081]: NETFILTER_CFG table=filter:107 family=2 entries=94 op=nft_register_chain pid=4081 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:54:55.463000 audit[4081]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdc4509540 a2=0 a3=7ffdc450952c items=0 ppid=3878 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:55.463000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:54:55.541096 systemd-networkd[1717]: cali802a18ecd91: Gained IPv6LL Aug 13 00:54:55.651673 env[1532]: 
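The PROCTITLE fields in the audit records above are hex-encoded command lines: the kernel stores argv joined by NUL bytes, and auditd prints the buffer as hex when it contains non-printable separators. As a reading aid only (a minimal standalone sketch, not part of the logged system), the following Go program decodes one of the values above back into the bpftool invocation behind the repeated bpf/perfmon capability checks; the same decoder applies to the iptables-nft-restore PROCTITLE values in the NETFILTER_CFG records.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into the command
// line: the raw buffer is argv joined by NUL bytes, hex-encoded by auditd.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// One of the PROCTITLE values from the bpftool audit records above.
	const p = "627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd)
}

Running it prints "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp", that is, the Calico agent loading its XDP prefilter object and pinning it under /sys/fs/bpf/calico/xdp/; the later records decode to "bpftool --json --pretty prog show pinned ..." and "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000".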
time="2025-08-13T00:54:55.651560240Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.657942 env[1532]: time="2025-08-13T00:54:55.657904905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.662077 env[1532]: time="2025-08-13T00:54:55.662042947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.665940 env[1532]: time="2025-08-13T00:54:55.665908786Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.666444 env[1532]: time="2025-08-13T00:54:55.666411892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:54:55.668696 env[1532]: time="2025-08-13T00:54:55.668664515Z" level=info msg="CreateContainer within sandbox \"fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:54:55.705460 env[1532]: time="2025-08-13T00:54:55.705409491Z" level=info msg="CreateContainer within sandbox \"fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e96110aa8ff13fa131b2bf4f6e19c3385e2846db1126f1843c146189945b7bc9\"" Aug 13 00:54:55.706652 env[1532]: time="2025-08-13T00:54:55.706048197Z" level=info msg="StartContainer for \"e96110aa8ff13fa131b2bf4f6e19c3385e2846db1126f1843c146189945b7bc9\"" Aug 13 00:54:55.737578 systemd[1]: run-containerd-runc-k8s.io-e96110aa8ff13fa131b2bf4f6e19c3385e2846db1126f1843c146189945b7bc9-runc.RMyxxo.mount: Deactivated successfully. Aug 13 00:54:55.788526 env[1532]: time="2025-08-13T00:54:55.787046627Z" level=info msg="StartContainer for \"e96110aa8ff13fa131b2bf4f6e19c3385e2846db1126f1843c146189945b7bc9\" returns successfully" Aug 13 00:54:55.789799 env[1532]: time="2025-08-13T00:54:55.789761354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:54:56.198671 env[1532]: time="2025-08-13T00:54:56.197951409Z" level=info msg="StopPodSandbox for \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\"" Aug 13 00:54:56.198671 env[1532]: time="2025-08-13T00:54:56.198587416Z" level=info msg="StopPodSandbox for \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\"" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.267 [INFO][4151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.268 [INFO][4151] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" iface="eth0" netns="/var/run/netns/cni-f2de3778-0bd5-190f-79bb-964a6db83c8d" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.268 [INFO][4151] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" iface="eth0" netns="/var/run/netns/cni-f2de3778-0bd5-190f-79bb-964a6db83c8d" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.274 [INFO][4151] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" iface="eth0" netns="/var/run/netns/cni-f2de3778-0bd5-190f-79bb-964a6db83c8d" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.274 [INFO][4151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.274 [INFO][4151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.313 [INFO][4168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.313 [INFO][4168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.313 [INFO][4168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.321 [WARNING][4168] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.321 [INFO][4168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.323 [INFO][4168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:56.326480 env[1532]: 2025-08-13 00:54:56.324 [INFO][4151] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:56.327496 env[1532]: time="2025-08-13T00:54:56.327451119Z" level=info msg="TearDown network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\" successfully" Aug 13 00:54:56.327745 env[1532]: time="2025-08-13T00:54:56.327713322Z" level=info msg="StopPodSandbox for \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\" returns successfully" Aug 13 00:54:56.328818 env[1532]: time="2025-08-13T00:54:56.328785333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7987d7d768-cg7mk,Uid:0d0c4b64-0bfe-4382-b882-b2136806044c,Namespace:calico-system,Attempt:1,}" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.284 [INFO][4159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.284 [INFO][4159] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" iface="eth0" netns="/var/run/netns/cni-8f978fac-4843-da5a-68e1-3a16c8056676" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.285 [INFO][4159] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" iface="eth0" netns="/var/run/netns/cni-8f978fac-4843-da5a-68e1-3a16c8056676" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.285 [INFO][4159] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" iface="eth0" netns="/var/run/netns/cni-8f978fac-4843-da5a-68e1-3a16c8056676" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.285 [INFO][4159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.285 [INFO][4159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.320 [INFO][4173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.320 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.323 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.330 [WARNING][4173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.330 [INFO][4173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.332 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:56.335210 env[1532]: 2025-08-13 00:54:56.333 [INFO][4159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:54:56.335643 env[1532]: time="2025-08-13T00:54:56.335322099Z" level=info msg="TearDown network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\" successfully" Aug 13 00:54:56.335643 env[1532]: time="2025-08-13T00:54:56.335350799Z" level=info msg="StopPodSandbox for \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\" returns successfully" Aug 13 00:54:56.336227 env[1532]: time="2025-08-13T00:54:56.336199008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-49v42,Uid:a784665e-e2ea-4562-8f6c-bf4d4c9ab351,Namespace:kube-system,Attempt:1,}" Aug 13 00:54:56.547215 systemd-networkd[1717]: califf60b71f983: Link UP Aug 13 00:54:56.556700 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:54:56.556800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califf60b71f983: link becomes ready Aug 13 00:54:56.557067 systemd-networkd[1717]: califf60b71f983: Gained carrier Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.428 [INFO][4181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0 calico-kube-controllers-7987d7d768- calico-system 0d0c4b64-0bfe-4382-b882-b2136806044c 923 0 2025-08-13 00:54:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7987d7d768 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-a-1859c445b4 calico-kube-controllers-7987d7d768-cg7mk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califf60b71f983 [] [] }} ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Namespace="calico-system" Pod="calico-kube-controllers-7987d7d768-cg7mk" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.428 [INFO][4181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Namespace="calico-system" Pod="calico-kube-controllers-7987d7d768-cg7mk" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.485 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" HandleID="k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.486 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" HandleID="k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-a-1859c445b4", "pod":"calico-kube-controllers-7987d7d768-cg7mk", "timestamp":"2025-08-13 00:54:56.485924822 +0000 UTC"}, Hostname:"ci-3510.3.8-a-1859c445b4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.486 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.486 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.486 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-1859c445b4' Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.495 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.504 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.508 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.510 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.512 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.512 [INFO][4207] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.515 [INFO][4207] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138 Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.520 [INFO][4207] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.530 [INFO][4207] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.66/26] block=192.168.84.64/26 handle="k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 
env[1532]: 2025-08-13 00:54:56.530 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.66/26] handle="k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.530 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:56.574450 env[1532]: 2025-08-13 00:54:56.530 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.66/26] IPv6=[] ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" HandleID="k8s-pod-network.6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.577548 env[1532]: 2025-08-13 00:54:56.535 [INFO][4181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Namespace="calico-system" Pod="calico-kube-controllers-7987d7d768-cg7mk" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0", GenerateName:"calico-kube-controllers-7987d7d768-", Namespace:"calico-system", SelfLink:"", UID:"0d0c4b64-0bfe-4382-b882-b2136806044c", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7987d7d768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"", Pod:"calico-kube-controllers-7987d7d768-cg7mk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf60b71f983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:56.577548 env[1532]: 2025-08-13 00:54:56.535 [INFO][4181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.66/32] ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Namespace="calico-system" Pod="calico-kube-controllers-7987d7d768-cg7mk" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.577548 env[1532]: 2025-08-13 00:54:56.535 [INFO][4181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf60b71f983 ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Namespace="calico-system" Pod="calico-kube-controllers-7987d7d768-cg7mk" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.577548 env[1532]: 2025-08-13 
00:54:56.557 [INFO][4181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Namespace="calico-system" Pod="calico-kube-controllers-7987d7d768-cg7mk" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.577548 env[1532]: 2025-08-13 00:54:56.558 [INFO][4181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Namespace="calico-system" Pod="calico-kube-controllers-7987d7d768-cg7mk" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0", GenerateName:"calico-kube-controllers-7987d7d768-", Namespace:"calico-system", SelfLink:"", UID:"0d0c4b64-0bfe-4382-b882-b2136806044c", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7987d7d768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138", Pod:"calico-kube-controllers-7987d7d768-cg7mk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf60b71f983", MAC:"d2:12:07:44:d8:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:56.577548 env[1532]: 2025-08-13 00:54:56.571 [INFO][4181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138" Namespace="calico-system" Pod="calico-kube-controllers-7987d7d768-cg7mk" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:56.584000 audit[4228]: NETFILTER_CFG table=filter:108 family=2 entries=36 op=nft_register_chain pid=4228 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:54:56.584000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffec49c5030 a2=0 a3=7ffec49c501c items=0 ppid=3878 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:56.584000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:54:56.596539 env[1532]: 
time="2025-08-13T00:54:56.596471140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:56.596539 env[1532]: time="2025-08-13T00:54:56.596506940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:56.596539 env[1532]: time="2025-08-13T00:54:56.596520440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:56.596996 env[1532]: time="2025-08-13T00:54:56.596934045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138 pid=4237 runtime=io.containerd.runc.v2 Aug 13 00:54:56.659989 systemd-networkd[1717]: calibcff517bfc3: Link UP Aug 13 00:54:56.666460 systemd-networkd[1717]: calibcff517bfc3: Gained carrier Aug 13 00:54:56.666927 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibcff517bfc3: link becomes ready Aug 13 00:54:56.703044 systemd[1]: run-netns-cni\x2df2de3778\x2d0bd5\x2d190f\x2d79bb\x2d964a6db83c8d.mount: Deactivated successfully. Aug 13 00:54:56.703543 systemd[1]: run-netns-cni\x2d8f978fac\x2d4843\x2dda5a\x2d68e1\x2d3a16c8056676.mount: Deactivated successfully. Aug 13 00:54:56.708000 audit[4277]: NETFILTER_CFG table=filter:109 family=2 entries=46 op=nft_register_chain pid=4277 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:54:56.708000 audit[4277]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffc2ae75e50 a2=0 a3=7ffc2ae75e3c items=0 ppid=3878 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:56.708000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.459 [INFO][4193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0 coredns-7c65d6cfc9- kube-system a784665e-e2ea-4562-8f6c-bf4d4c9ab351 924 0 2025-08-13 00:54:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-a-1859c445b4 coredns-7c65d6cfc9-49v42 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibcff517bfc3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-49v42" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.459 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-49v42" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.505 [INFO][4214] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" HandleID="k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.505 [INFO][4214] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" HandleID="k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003254a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-a-1859c445b4", "pod":"coredns-7c65d6cfc9-49v42", "timestamp":"2025-08-13 00:54:56.505207917 +0000 UTC"}, Hostname:"ci-3510.3.8-a-1859c445b4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.505 [INFO][4214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.530 [INFO][4214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.530 [INFO][4214] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-1859c445b4' Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.607 [INFO][4214] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.619 [INFO][4214] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.624 [INFO][4214] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.627 [INFO][4214] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.633 [INFO][4214] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.633 [INFO][4214] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.635 [INFO][4214] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1 Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.641 [INFO][4214] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.652 [INFO][4214] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.67/26] block=192.168.84.64/26 handle="k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.652 [INFO][4214] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.67/26] handle="k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.652 [INFO][4214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:56.713586 env[1532]: 2025-08-13 00:54:56.652 [INFO][4214] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.67/26] IPv6=[] ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" HandleID="k8s-pod-network.f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.714736 env[1532]: 2025-08-13 00:54:56.654 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-49v42" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a784665e-e2ea-4562-8f6c-bf4d4c9ab351", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"", Pod:"coredns-7c65d6cfc9-49v42", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcff517bfc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:56.714736 env[1532]: 2025-08-13 00:54:56.654 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.67/32] ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-49v42" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.714736 env[1532]: 2025-08-13 00:54:56.654 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibcff517bfc3 ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-49v42" 
WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.714736 env[1532]: 2025-08-13 00:54:56.667 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-49v42" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.714736 env[1532]: 2025-08-13 00:54:56.672 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-49v42" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a784665e-e2ea-4562-8f6c-bf4d4c9ab351", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1", Pod:"coredns-7c65d6cfc9-49v42", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcff517bfc3", MAC:"1e:4c:21:58:8b:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:56.714736 env[1532]: 2025-08-13 00:54:56.710 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-49v42" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:54:56.720442 env[1532]: time="2025-08-13T00:54:56.720380093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7987d7d768-cg7mk,Uid:0d0c4b64-0bfe-4382-b882-b2136806044c,Namespace:calico-system,Attempt:1,} returns sandbox id \"6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138\"" Aug 13 00:54:56.741142 env[1532]: time="2025-08-13T00:54:56.735162243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:56.741142 env[1532]: time="2025-08-13T00:54:56.735218343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:56.741142 env[1532]: time="2025-08-13T00:54:56.735232543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:56.741142 env[1532]: time="2025-08-13T00:54:56.735432845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1 pid=4289 runtime=io.containerd.runc.v2 Aug 13 00:54:56.819105 env[1532]: time="2025-08-13T00:54:56.819053391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-49v42,Uid:a784665e-e2ea-4562-8f6c-bf4d4c9ab351,Namespace:kube-system,Attempt:1,} returns sandbox id \"f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1\"" Aug 13 00:54:56.822075 env[1532]: time="2025-08-13T00:54:56.821586417Z" level=info msg="CreateContainer within sandbox \"f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:54:56.848669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236780253.mount: Deactivated successfully. Aug 13 00:54:56.864580 env[1532]: time="2025-08-13T00:54:56.864536051Z" level=info msg="CreateContainer within sandbox \"f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6de6f20018d8b0dfcfe8276032ea768429ee60695267b54546f7665f6d2a4de\"" Aug 13 00:54:56.866479 env[1532]: time="2025-08-13T00:54:56.865938965Z" level=info msg="StartContainer for \"f6de6f20018d8b0dfcfe8276032ea768429ee60695267b54546f7665f6d2a4de\"" Aug 13 00:54:56.923602 env[1532]: time="2025-08-13T00:54:56.923523348Z" level=info msg="StartContainer for \"f6de6f20018d8b0dfcfe8276032ea768429ee60695267b54546f7665f6d2a4de\" returns successfully" Aug 13 00:54:57.014529 systemd-networkd[1717]: vxlan.calico: Gained IPv6LL Aug 13 00:54:57.200007 env[1532]: time="2025-08-13T00:54:57.199606416Z" level=info msg="StopPodSandbox for \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\"" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.253 [INFO][4368] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.254 [INFO][4368] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" iface="eth0" netns="/var/run/netns/cni-d0b6e46d-23dd-4543-1c59-7f9f3a64421f" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.254 [INFO][4368] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" iface="eth0" netns="/var/run/netns/cni-d0b6e46d-23dd-4543-1c59-7f9f3a64421f" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.255 [INFO][4368] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" iface="eth0" netns="/var/run/netns/cni-d0b6e46d-23dd-4543-1c59-7f9f3a64421f" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.255 [INFO][4368] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.255 [INFO][4368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.278 [INFO][4375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.279 [INFO][4375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.279 [INFO][4375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.285 [WARNING][4375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.285 [INFO][4375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.286 [INFO][4375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:57.289334 env[1532]: 2025-08-13 00:54:57.288 [INFO][4368] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:54:57.289890 env[1532]: time="2025-08-13T00:54:57.289507215Z" level=info msg="TearDown network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\" successfully" Aug 13 00:54:57.289890 env[1532]: time="2025-08-13T00:54:57.289551115Z" level=info msg="StopPodSandbox for \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\" returns successfully" Aug 13 00:54:57.290611 env[1532]: time="2025-08-13T00:54:57.290576625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qf7t2,Uid:5dfe102c-5690-449a-a336-a40d559d5b09,Namespace:calico-system,Attempt:1,}" Aug 13 00:54:57.483000 audit[4394]: NETFILTER_CFG table=filter:110 family=2 entries=20 op=nft_register_rule pid=4394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:57.483000 audit[4394]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce550ce40 a2=0 a3=7ffce550ce2c items=0 ppid=2726 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:57.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:57.488000 audit[4394]: NETFILTER_CFG table=nat:111 family=2 entries=14 op=nft_register_rule pid=4394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:57.488000 audit[4394]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce550ce40 a2=0 a3=0 items=0 ppid=2726 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:57.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:57.498113 kubelet[2605]: I0813 00:54:57.497653 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-49v42" podStartSLOduration=53.497618395 podStartE2EDuration="53.497618395s" podCreationTimestamp="2025-08-13 00:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:57.472662245 +0000 UTC m=+58.375023224" watchObservedRunningTime="2025-08-13 00:54:57.497618395 +0000 UTC m=+58.399979374" Aug 13 00:54:57.519000 audit[4396]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:57.519000 audit[4396]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb811c920 a2=0 a3=7ffeb811c90c items=0 ppid=2726 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:57.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:57.524000 audit[4396]: NETFILTER_CFG table=nat:113 family=2 entries=35 op=nft_register_chain pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:57.524000 audit[4396]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeb811c920 a2=0 a3=7ffeb811c90c items=0 ppid=2726 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:57.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:57.634138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:54:57.634243 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali39d62b69845: link becomes ready Aug 13 00:54:57.631138 systemd-networkd[1717]: cali39d62b69845: Link UP Aug 13 00:54:57.636089 systemd-networkd[1717]: cali39d62b69845: Gained carrier Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.503 [INFO][4381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0 csi-node-driver- calico-system 5dfe102c-5690-449a-a336-a40d559d5b09 938 0 2025-08-13 00:54:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-a-1859c445b4 csi-node-driver-qf7t2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali39d62b69845 [] [] }} ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Namespace="calico-system" Pod="csi-node-driver-qf7t2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.503 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Namespace="calico-system" Pod="csi-node-driver-qf7t2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.563 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" HandleID="k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.563 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" HandleID="k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-a-1859c445b4", "pod":"csi-node-driver-qf7t2", "timestamp":"2025-08-13 00:54:57.563128049 +0000 UTC"}, Hostname:"ci-3510.3.8-a-1859c445b4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.563 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
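The audit records above log each netfilter restore with the invoking command line carried in the PROCTITLE field as hex-encoded, NUL-separated bytes. A minimal Go sketch of the decoding (the `decodeProctitle` helper is illustrative, not part of auditd or of the tools being logged):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns the hex-encoded, NUL-separated proctitle field of an
// audit PROCTITLE record back into a readable command line.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// proctitle value from the audit[4394]/audit[4396] records above
	const p = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
}
```

The iptables-nft-re records elsewhere in the log decode the same way, to iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000.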
Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.563 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.563 [INFO][4398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-1859c445b4' Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.570 [INFO][4398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.575 [INFO][4398] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.579 [INFO][4398] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.581 [INFO][4398] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.589 [INFO][4398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.589 [INFO][4398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.591 [INFO][4398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.597 [INFO][4398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.608 [INFO][4398] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.68/26] block=192.168.84.64/26 handle="k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.608 [INFO][4398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.68/26] handle="k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.608 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
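The [4398] ipam entries above show Calico assigning 192.168.84.68/26 to csi-node-driver-qf7t2 in a fixed order: acquire the host-wide IPAM lock, look up the host's block affinities, load the affine block 192.168.84.64/26, claim one free address under the per-container handle, write the block back, release the lock. A simplified sketch of that sequence with a single in-memory block; the types and function names are illustrative and not Calico's actual ipam package:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// ipamBlock is an illustrative stand-in for a Calico /26 allocation block.
type ipamBlock struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle that claimed it
}

var (
	hostIPAMLock sync.Mutex // stands in for the host-wide IPAM lock in the entries above
	blocks       = map[string]*ipamBlock{}
)

// nextIP returns ip+1 without mutating its argument.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// autoAssign mirrors the order of operations in the ipam/ipam.go lines: lock,
// resolve the host's affine block, load it, claim the first free address under
// the handle, write the block, unlock.
func autoAssign(host, handle, affineCIDR string) (net.IP, error) {
	hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	_, cidr, err := net.ParseCIDR(affineCIDR) // "Trying affinity for 192.168.84.64/26"
	if err != nil {
		return nil, err
	}
	blk, ok := blocks[cidr.String()] // "Attempting to load block"
	if !ok {
		blk = &ipamBlock{cidr: cidr, allocated: map[string]string{}}
		blocks[cidr.String()] = blk
	}
	// "Attempting to assign 1 addresses from block": take the first free address.
	for ip := nextIP(cidr.IP); cidr.Contains(ip); ip = nextIP(ip) {
		if _, used := blk.allocated[ip.String()]; !used {
			blk.allocated[ip.String()] = handle // "Writing block in order to claim IPs"
			return ip, nil                      // "Successfully claimed IPs: [...]"
		}
	}
	return nil, fmt.Errorf("no free addresses in %s on host %s", affineCIDR, host)
}

func main() {
	ip, err := autoAssign("ci-3510.3.8-a-1859c445b4", "k8s-pod-network.<containerID>", "192.168.84.64/26")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.84.65 in this empty toy block
}
```

The real allocator returned .68 here because lower addresses in the block were already claimed by the endpoints set up earlier in the log; the toy block starts empty, so it hands out .65.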
Aug 13 00:54:57.656684 env[1532]: 2025-08-13 00:54:57.608 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.68/26] IPv6=[] ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" HandleID="k8s-pod-network.3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.657711 env[1532]: 2025-08-13 00:54:57.614 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Namespace="calico-system" Pod="csi-node-driver-qf7t2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5dfe102c-5690-449a-a336-a40d559d5b09", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"", Pod:"csi-node-driver-qf7t2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39d62b69845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:57.657711 env[1532]: 2025-08-13 00:54:57.614 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.68/32] ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Namespace="calico-system" Pod="csi-node-driver-qf7t2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.657711 env[1532]: 2025-08-13 00:54:57.614 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39d62b69845 ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Namespace="calico-system" Pod="csi-node-driver-qf7t2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.657711 env[1532]: 2025-08-13 00:54:57.636 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Namespace="calico-system" Pod="csi-node-driver-qf7t2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.657711 env[1532]: 2025-08-13 00:54:57.637 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" 
Namespace="calico-system" Pod="csi-node-driver-qf7t2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5dfe102c-5690-449a-a336-a40d559d5b09", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d", Pod:"csi-node-driver-qf7t2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39d62b69845", MAC:"72:cd:64:92:fe:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:57.657711 env[1532]: 2025-08-13 00:54:57.653 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d" Namespace="calico-system" Pod="csi-node-driver-qf7t2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:54:57.686000 audit[4415]: NETFILTER_CFG table=filter:114 family=2 entries=44 op=nft_register_chain pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:54:57.686000 audit[4415]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffd4cad7110 a2=0 a3=7ffd4cad70fc items=0 ppid=3878 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:57.686000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:54:57.703343 systemd[1]: run-netns-cni\x2dd0b6e46d\x2d23dd\x2d4543\x2d1c59\x2d7f9f3a64421f.mount: Deactivated successfully. Aug 13 00:54:57.724214 env[1532]: time="2025-08-13T00:54:57.724151159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:57.724687 env[1532]: time="2025-08-13T00:54:57.724659964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:57.724784 env[1532]: time="2025-08-13T00:54:57.724763065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:57.725034 env[1532]: time="2025-08-13T00:54:57.725005867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d pid=4424 runtime=io.containerd.runc.v2 Aug 13 00:54:57.786096 systemd[1]: run-containerd-runc-k8s.io-3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d-runc.KR1nKo.mount: Deactivated successfully. Aug 13 00:54:57.813250 env[1532]: time="2025-08-13T00:54:57.813206649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qf7t2,Uid:5dfe102c-5690-449a-a336-a40d559d5b09,Namespace:calico-system,Attempt:1,} returns sandbox id \"3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d\"" Aug 13 00:54:57.972974 systemd-networkd[1717]: calibcff517bfc3: Gained IPv6LL Aug 13 00:54:58.165522 systemd-networkd[1717]: califf60b71f983: Gained IPv6LL Aug 13 00:54:58.309701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675504171.mount: Deactivated successfully. Aug 13 00:54:58.370574 env[1532]: time="2025-08-13T00:54:58.370518276Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:58.377261 env[1532]: time="2025-08-13T00:54:58.377218042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:58.380658 env[1532]: time="2025-08-13T00:54:58.380623475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:58.384651 env[1532]: time="2025-08-13T00:54:58.384621315Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:58.385140 env[1532]: time="2025-08-13T00:54:58.385108720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 00:54:58.388160 env[1532]: time="2025-08-13T00:54:58.387239241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:54:58.388712 env[1532]: time="2025-08-13T00:54:58.388685155Z" level=info msg="CreateContainer within sandbox \"fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:54:58.435843 env[1532]: time="2025-08-13T00:54:58.435243015Z" level=info msg="CreateContainer within sandbox \"fd25073ec7550da2582a7267e188add7cc68048f66b8d8e9957cd6720ece44f7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f556e1e6535b805f2db6770ce3ce78c816830f21ffca8f2f139340d5b3b605b5\"" Aug 13 00:54:58.437574 env[1532]: time="2025-08-13T00:54:58.436394926Z" level=info msg="StartContainer for \"f556e1e6535b805f2db6770ce3ce78c816830f21ffca8f2f139340d5b3b605b5\"" Aug 13 00:54:58.514371 env[1532]: time="2025-08-13T00:54:58.514312696Z" level=info msg="StartContainer for 
\"f556e1e6535b805f2db6770ce3ce78c816830f21ffca8f2f139340d5b3b605b5\" returns successfully" Aug 13 00:54:58.741047 systemd-networkd[1717]: cali39d62b69845: Gained IPv6LL Aug 13 00:54:59.175382 env[1532]: time="2025-08-13T00:54:59.175339206Z" level=info msg="StopPodSandbox for \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\"" Aug 13 00:54:59.204488 env[1532]: time="2025-08-13T00:54:59.204446090Z" level=info msg="StopPodSandbox for \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\"" Aug 13 00:54:59.206517 env[1532]: time="2025-08-13T00:54:59.205414000Z" level=info msg="StopPodSandbox for \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\"" Aug 13 00:54:59.209790 env[1532]: time="2025-08-13T00:54:59.205612802Z" level=info msg="StopPodSandbox for \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\"" Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.224 [WARNING][4501] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0", GenerateName:"calico-kube-controllers-7987d7d768-", Namespace:"calico-system", SelfLink:"", UID:"0d0c4b64-0bfe-4382-b882-b2136806044c", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7987d7d768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138", Pod:"calico-kube-controllers-7987d7d768-cg7mk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf60b71f983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.225 [INFO][4501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.225 [INFO][4501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" iface="eth0" netns="" Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.225 [INFO][4501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.225 [INFO][4501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.278 [INFO][4536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.279 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.279 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.286 [WARNING][4536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.287 [INFO][4536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.288 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:59.300074 env[1532]: 2025-08-13 00:54:59.290 [INFO][4501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:59.300788 env[1532]: time="2025-08-13T00:54:59.300108324Z" level=info msg="TearDown network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\" successfully" Aug 13 00:54:59.300788 env[1532]: time="2025-08-13T00:54:59.300146525Z" level=info msg="StopPodSandbox for \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\" returns successfully" Aug 13 00:54:59.301223 env[1532]: time="2025-08-13T00:54:59.301188335Z" level=info msg="RemovePodSandbox for \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\"" Aug 13 00:54:59.301426 env[1532]: time="2025-08-13T00:54:59.301368137Z" level=info msg="Forcibly stopping sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\"" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.349 [INFO][4549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.349 [INFO][4549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" iface="eth0" netns="/var/run/netns/cni-8527f235-21fb-bd34-cac7-2349eb6131b2" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.349 [INFO][4549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" iface="eth0" netns="/var/run/netns/cni-8527f235-21fb-bd34-cac7-2349eb6131b2" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.359 [INFO][4549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" iface="eth0" netns="/var/run/netns/cni-8527f235-21fb-bd34-cac7-2349eb6131b2" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.359 [INFO][4549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.359 [INFO][4549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.448 [INFO][4579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.454 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.454 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.469 [WARNING][4579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.469 [INFO][4579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.471 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:59.480234 env[1532]: 2025-08-13 00:54:59.476 [INFO][4549] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:54:59.485385 env[1532]: time="2025-08-13T00:54:59.485342033Z" level=info msg="TearDown network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\" successfully" Aug 13 00:54:59.485541 env[1532]: time="2025-08-13T00:54:59.485517135Z" level=info msg="StopPodSandbox for \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\" returns successfully" Aug 13 00:54:59.486247 env[1532]: time="2025-08-13T00:54:59.486216942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pkkr2,Uid:6c80f539-1a56-46fa-b014-bcb6516c078a,Namespace:calico-system,Attempt:1,}" Aug 13 00:54:59.487146 systemd[1]: run-netns-cni\x2d8527f235\x2d21fb\x2dbd34\x2dcac7\x2d2349eb6131b2.mount: Deactivated successfully. Aug 13 00:54:59.500216 kubelet[2605]: I0813 00:54:59.499534 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cf566484b-n65qf" podStartSLOduration=2.459924638 podStartE2EDuration="6.499510772s" podCreationTimestamp="2025-08-13 00:54:53 +0000 UTC" firstStartedPulling="2025-08-13 00:54:54.346671497 +0000 UTC m=+55.249032476" lastFinishedPulling="2025-08-13 00:54:58.386257631 +0000 UTC m=+59.288618610" observedRunningTime="2025-08-13 00:54:59.496456042 +0000 UTC m=+60.398817021" watchObservedRunningTime="2025-08-13 00:54:59.499510772 +0000 UTC m=+60.401871751" Aug 13 00:54:59.570364 kernel: kauditd_printk_skb: 574 callbacks suppressed Aug 13 00:54:59.570494 kernel: audit: type=1325 audit(1755046499.560:411): table=filter:115 family=2 entries=13 op=nft_register_rule pid=4607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:59.560000 audit[4607]: NETFILTER_CFG table=filter:115 family=2 entries=13 op=nft_register_rule pid=4607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:59.581953 kernel: audit: type=1300 audit(1755046499.560:411): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcc3ed77f0 a2=0 a3=7ffcc3ed77dc items=0 ppid=2726 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:59.560000 audit[4607]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcc3ed77f0 a2=0 a3=7ffcc3ed77dc items=0 ppid=2726 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.366 [INFO][4544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.366 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" iface="eth0" netns="/var/run/netns/cni-6daa4352-048f-999e-5cbb-9bedfe629006" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.377 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" iface="eth0" netns="/var/run/netns/cni-6daa4352-048f-999e-5cbb-9bedfe629006" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.377 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" iface="eth0" netns="/var/run/netns/cni-6daa4352-048f-999e-5cbb-9bedfe629006" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.377 [INFO][4544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.377 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.493 [INFO][4585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.494 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.494 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.517 [WARNING][4585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.517 [INFO][4585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.551 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:59.582448 env[1532]: 2025-08-13 00:54:59.570 [INFO][4544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.372 [INFO][4545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.372 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" iface="eth0" netns="/var/run/netns/cni-2c0bb49e-7d1e-6005-6483-88593225e552" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.372 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" iface="eth0" netns="/var/run/netns/cni-2c0bb49e-7d1e-6005-6483-88593225e552" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.373 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" iface="eth0" netns="/var/run/netns/cni-2c0bb49e-7d1e-6005-6483-88593225e552" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.373 [INFO][4545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.373 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.549 [INFO][4586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.549 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.562 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.571 [WARNING][4586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.571 [INFO][4586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.572 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:59.583986 env[1532]: 2025-08-13 00:54:59.580 [INFO][4545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:54:59.609105 env[1532]: time="2025-08-13T00:54:59.609049241Z" level=info msg="TearDown network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\" successfully" Aug 13 00:54:59.609273 env[1532]: time="2025-08-13T00:54:59.609251543Z" level=info msg="StopPodSandbox for \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\" returns successfully" Aug 13 00:54:59.609469 env[1532]: time="2025-08-13T00:54:59.609444045Z" level=info msg="TearDown network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\" successfully" Aug 13 00:54:59.609577 env[1532]: time="2025-08-13T00:54:59.609558946Z" level=info msg="StopPodSandbox for \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\" returns successfully" Aug 13 00:54:59.610061 systemd[1]: run-netns-cni\x2d6daa4352\x2d048f\x2d999e\x2d5cbb\x2d9bedfe629006.mount: Deactivated successfully. 
Aug 13 00:54:59.610290 systemd[1]: run-netns-cni\x2d2c0bb49e\x2d7d1e\x2d6005\x2d6483\x2d88593225e552.mount: Deactivated successfully. Aug 13 00:54:59.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:59.625879 kernel: audit: type=1327 audit(1755046499.560:411): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:59.579000 audit[4607]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=4607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:59.632029 env[1532]: time="2025-08-13T00:54:59.629406940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8nhmz,Uid:30e30d82-15c7-47b1-9012-021e8bd25177,Namespace:kube-system,Attempt:1,}" Aug 13 00:54:59.632029 env[1532]: time="2025-08-13T00:54:59.629936445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589fbcc97d-sqb5w,Uid:27000197-aa78-41c4-95cb-9d77cedc6876,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:54:59.658115 kernel: audit: type=1325 audit(1755046499.579:412): table=nat:116 family=2 entries=27 op=nft_register_chain pid=4607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:59.658223 kernel: audit: type=1300 audit(1755046499.579:412): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffcc3ed77f0 a2=0 a3=7ffcc3ed77dc items=0 ppid=2726 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:59.579000 audit[4607]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffcc3ed77f0 a2=0 a3=7ffcc3ed77dc items=0 ppid=2726 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:59.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:59.668871 kernel: audit: type=1327 audit(1755046499.579:412): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.438 [WARNING][4573] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0", GenerateName:"calico-kube-controllers-7987d7d768-", Namespace:"calico-system", SelfLink:"", UID:"0d0c4b64-0bfe-4382-b882-b2136806044c", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7987d7d768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138", Pod:"calico-kube-controllers-7987d7d768-cg7mk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf60b71f983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.438 [INFO][4573] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.438 [INFO][4573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" iface="eth0" netns="" Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.438 [INFO][4573] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.439 [INFO][4573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.631 [INFO][4597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.632 [INFO][4597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.632 [INFO][4597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.676 [WARNING][4597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.676 [INFO][4597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" HandleID="k8s-pod-network.fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--kube--controllers--7987d7d768--cg7mk-eth0" Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.678 [INFO][4597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:54:59.683203 env[1532]: 2025-08-13 00:54:59.680 [INFO][4573] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831" Aug 13 00:54:59.684949 env[1532]: time="2025-08-13T00:54:59.684909582Z" level=info msg="TearDown network for sandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\" successfully" Aug 13 00:54:59.706990 env[1532]: time="2025-08-13T00:54:59.705508683Z" level=info msg="RemovePodSandbox \"fcce102baf165ffaec411779e67a3bb512f76370d95e8e77db3ae542bc138831\" returns successfully" Aug 13 00:54:59.707876 env[1532]: time="2025-08-13T00:54:59.707821606Z" level=info msg="StopPodSandbox for \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\"" Aug 13 00:55:00.094519 systemd-networkd[1717]: calie596f395d92: Link UP Aug 13 00:55:00.105883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:55:00.105986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie596f395d92: link becomes ready Aug 13 00:55:00.113664 systemd-networkd[1717]: calie596f395d92: Gained carrier Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:54:59.778 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0 goldmane-58fd7646b9- calico-system 6c80f539-1a56-46fa-b014-bcb6516c078a 962 0 2025-08-13 00:54:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-a-1859c445b4 goldmane-58fd7646b9-pkkr2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie596f395d92 [] [] }} ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Namespace="calico-system" Pod="goldmane-58fd7646b9-pkkr2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:54:59.778 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Namespace="calico-system" Pod="goldmane-58fd7646b9-pkkr2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.025 [INFO][4642] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" HandleID="k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" 
Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.026 [INFO][4642] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" HandleID="k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037e780), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-a-1859c445b4", "pod":"goldmane-58fd7646b9-pkkr2", "timestamp":"2025-08-13 00:55:00.024356395 +0000 UTC"}, Hostname:"ci-3510.3.8-a-1859c445b4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.026 [INFO][4642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.026 [INFO][4642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.026 [INFO][4642] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-1859c445b4' Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.035 [INFO][4642] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.041 [INFO][4642] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.046 [INFO][4642] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.048 [INFO][4642] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.050 [INFO][4642] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.051 [INFO][4642] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.052 [INFO][4642] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487 Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.058 [INFO][4642] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.071 [INFO][4642] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.69/26] block=192.168.84.64/26 handle="k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.071 [INFO][4642] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.69/26] handle="k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" 
host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.071 [INFO][4642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:00.146096 env[1532]: 2025-08-13 00:55:00.071 [INFO][4642] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.69/26] IPv6=[] ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" HandleID="k8s-pod-network.487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:55:00.147150 env[1532]: 2025-08-13 00:55:00.078 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Namespace="calico-system" Pod="goldmane-58fd7646b9-pkkr2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6c80f539-1a56-46fa-b014-bcb6516c078a", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"", Pod:"goldmane-58fd7646b9-pkkr2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie596f395d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:00.147150 env[1532]: 2025-08-13 00:55:00.078 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.69/32] ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Namespace="calico-system" Pod="goldmane-58fd7646b9-pkkr2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:55:00.147150 env[1532]: 2025-08-13 00:55:00.079 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie596f395d92 ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Namespace="calico-system" Pod="goldmane-58fd7646b9-pkkr2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:55:00.147150 env[1532]: 2025-08-13 00:55:00.118 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Namespace="calico-system" Pod="goldmane-58fd7646b9-pkkr2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:55:00.147150 env[1532]: 2025-08-13 00:55:00.118 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Namespace="calico-system" Pod="goldmane-58fd7646b9-pkkr2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6c80f539-1a56-46fa-b014-bcb6516c078a", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487", Pod:"goldmane-58fd7646b9-pkkr2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie596f395d92", MAC:"86:af:74:98:96:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:00.147150 env[1532]: 2025-08-13 00:55:00.138 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487" Namespace="calico-system" Pod="goldmane-58fd7646b9-pkkr2" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:55:00.190307 systemd-networkd[1717]: cali703ae353144: Link UP Aug 13 00:55:00.197278 systemd-networkd[1717]: cali703ae353144: Gained carrier Aug 13 00:55:00.198083 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali703ae353144: link becomes ready Aug 13 00:55:00.212532 env[1532]: time="2025-08-13T00:55:00.210335990Z" level=info msg="StopPodSandbox for \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\"" Aug 13 00:55:00.212000 audit[4696]: NETFILTER_CFG table=filter:117 family=2 entries=56 op=nft_register_chain pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.002 [WARNING][4630] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a784665e-e2ea-4562-8f6c-bf4d4c9ab351", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1", Pod:"coredns-7c65d6cfc9-49v42", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcff517bfc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.005 [INFO][4630] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.005 [INFO][4630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" iface="eth0" netns="" Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.005 [INFO][4630] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.005 [INFO][4630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.166 [INFO][4672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.167 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.175 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.222 [WARNING][4672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.222 [INFO][4672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.223 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:00.228322 env[1532]: 2025-08-13 00:55:00.225 [INFO][4630] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:55:00.231648 kernel: audit: type=1325 audit(1755046500.212:413): table=filter:117 family=2 entries=56 op=nft_register_chain pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:00.258643 kernel: audit: type=1300 audit(1755046500.212:413): arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7ffd85846000 a2=0 a3=7ffd85845fec items=0 ppid=3878 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:00.212000 audit[4696]: SYSCALL arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7ffd85846000 a2=0 a3=7ffd85845fec items=0 ppid=3878 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:00.258898 env[1532]: time="2025-08-13T00:55:00.242034497Z" level=info msg="TearDown network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\" successfully" Aug 13 00:55:00.258898 env[1532]: time="2025-08-13T00:55:00.242065797Z" level=info msg="StopPodSandbox for \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\" returns successfully" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:54:59.942 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0 calico-apiserver-589fbcc97d- calico-apiserver 27000197-aa78-41c4-95cb-9d77cedc6876 963 0 2025-08-13 00:54:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:589fbcc97d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-a-1859c445b4 calico-apiserver-589fbcc97d-sqb5w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali703ae353144 [] [] }} ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-sqb5w" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 
00:54:59.942 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-sqb5w" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.064 [INFO][4663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" HandleID="k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.064 [INFO][4663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" HandleID="k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-a-1859c445b4", "pod":"calico-apiserver-589fbcc97d-sqb5w", "timestamp":"2025-08-13 00:55:00.064128579 +0000 UTC"}, Hostname:"ci-3510.3.8-a-1859c445b4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.064 [INFO][4663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.073 [INFO][4663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
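For context on the "Extracted identifiers for CmdAddK8s" entries: a CNI plugin such as Calico receives the network config on stdin and the identifiers it logs (container ID, netns path, pod namespace and name) through environment variables defined by the CNI spec, with the Kubernetes pod identity carried in CNI_ARGS. A hedged Python sketch of that invocation contract follows; the binary path, netns path and the minimal network config are hypothetical placeholders, and the container ID is truncated:

import json, subprocess

netconf = json.dumps({"cniVersion": "0.3.1", "name": "k8s-pod-network", "type": "calico"})  # placeholder config
env = {
    "CNI_COMMAND": "ADD",                        # the teardown entries above correspond to DEL
    "CNI_CONTAINERID": "2c0ec0b1ca4e...",        # sandbox ID, truncated here
    "CNI_NETNS": "/var/run/netns/cni-example",   # hypothetical netns path
    "CNI_IFNAME": "eth0",
    "CNI_PATH": "/opt/cni/bin",
    # kubelet/CRI passes pod identity via CNI_ARGS; these are the fields Calico extracts.
    "CNI_ARGS": "K8S_POD_NAMESPACE=calico-apiserver;"
                "K8S_POD_NAME=calico-apiserver-589fbcc97d-sqb5w;"
                "K8S_POD_INFRA_CONTAINER_ID=2c0ec0b1ca4e...",
}
result = subprocess.run(["/opt/cni/bin/calico"], input=netconf, env=env,
                        capture_output=True, text=True)
print(result.stdout)  # on success, a CNI result JSON describing interfaces and assigned IPs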
Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.073 [INFO][4663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-1859c445b4' Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.137 [INFO][4663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.143 [INFO][4663] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.147 [INFO][4663] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.150 [INFO][4663] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.152 [INFO][4663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.152 [INFO][4663] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.154 [INFO][4663] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.160 [INFO][4663] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.174 [INFO][4663] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.70/26] block=192.168.84.64/26 handle="k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.174 [INFO][4663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.70/26] handle="k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.174 [INFO][4663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
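The ipam.go entries above trace Calico's block-affinity flow: the host ci-3510.3.8-a-1859c445b4 holds an affinity for the block 192.168.84.64/26, and individual /32 addresses are claimed from it under the host-wide IPAM lock. A quick consistency check over the addresses that appear in this section (an annotation on the log, not Calico code):

import ipaddress

block = ipaddress.ip_network("192.168.84.64/26")   # block the host has affinity for, per the entries above
claimed = ["192.168.84.66", "192.168.84.67", "192.168.84.69", "192.168.84.70", "192.168.84.71"]

print(block.num_addresses)                                      # 64 addresses per /26 block
print(all(ipaddress.ip_address(a) in block for a in claimed))   # True: every claimed /32 falls inside the block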
Aug 13 00:55:00.258898 env[1532]: 2025-08-13 00:55:00.174 [INFO][4663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.70/26] IPv6=[] ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" HandleID="k8s-pod-network.2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:55:00.260845 env[1532]: 2025-08-13 00:55:00.177 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-sqb5w" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0", GenerateName:"calico-apiserver-589fbcc97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"27000197-aa78-41c4-95cb-9d77cedc6876", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589fbcc97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"", Pod:"calico-apiserver-589fbcc97d-sqb5w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali703ae353144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:00.260845 env[1532]: 2025-08-13 00:55:00.177 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.70/32] ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-sqb5w" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:55:00.260845 env[1532]: 2025-08-13 00:55:00.178 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali703ae353144 ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-sqb5w" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:55:00.260845 env[1532]: 2025-08-13 00:55:00.202 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-sqb5w" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:55:00.260845 env[1532]: 2025-08-13 00:55:00.203 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-sqb5w" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0", GenerateName:"calico-apiserver-589fbcc97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"27000197-aa78-41c4-95cb-9d77cedc6876", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589fbcc97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb", Pod:"calico-apiserver-589fbcc97d-sqb5w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali703ae353144", MAC:"3a:0e:55:b2:ef:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:00.260845 env[1532]: 2025-08-13 00:55:00.240 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-sqb5w" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:55:00.212000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:00.259000 audit[4708]: NETFILTER_CFG table=filter:118 family=2 entries=66 op=nft_register_chain pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:00.283293 kernel: audit: type=1327 audit(1755046500.212:413): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:00.283369 kernel: audit: type=1325 audit(1755046500.259:414): table=filter:118 family=2 entries=66 op=nft_register_chain pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:00.259000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=32960 a0=3 a1=7ffd716115f0 a2=0 a3=7ffd716115dc items=0 ppid=3878 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:00.288606 env[1532]: time="2025-08-13T00:55:00.288545346Z" level=info msg="RemovePodSandbox for 
\"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\"" Aug 13 00:55:00.288873 env[1532]: time="2025-08-13T00:55:00.288752748Z" level=info msg="Forcibly stopping sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\"" Aug 13 00:55:00.259000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:00.338742 env[1532]: time="2025-08-13T00:55:00.321772167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:00.338742 env[1532]: time="2025-08-13T00:55:00.321820067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:00.338742 env[1532]: time="2025-08-13T00:55:00.322640975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:00.338742 env[1532]: time="2025-08-13T00:55:00.323199480Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487 pid=4727 runtime=io.containerd.runc.v2 Aug 13 00:55:00.375616 systemd-networkd[1717]: cali2804437750b: Link UP Aug 13 00:55:00.405932 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2804437750b: link becomes ready Aug 13 00:55:00.405533 systemd-networkd[1717]: cali2804437750b: Gained carrier Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.022 [INFO][4645] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0 coredns-7c65d6cfc9- kube-system 30e30d82-15c7-47b1-9012-021e8bd25177 964 0 2025-08-13 00:54:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-a-1859c445b4 coredns-7c65d6cfc9-8nhmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2804437750b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8nhmz" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.022 [INFO][4645] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8nhmz" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.247 [INFO][4679] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" HandleID="k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.248 [INFO][4679] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" HandleID="k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" 
Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-a-1859c445b4", "pod":"coredns-7c65d6cfc9-8nhmz", "timestamp":"2025-08-13 00:55:00.247951554 +0000 UTC"}, Hostname:"ci-3510.3.8-a-1859c445b4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.249 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.249 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.249 [INFO][4679] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-1859c445b4' Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.262 [INFO][4679] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.285 [INFO][4679] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.290 [INFO][4679] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.292 [INFO][4679] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.308 [INFO][4679] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.308 [INFO][4679] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.322 [INFO][4679] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.350 [INFO][4679] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.370 [INFO][4679] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.71/26] block=192.168.84.64/26 handle="k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.370 [INFO][4679] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.71/26] handle="k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.370 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:55:00.427616 env[1532]: 2025-08-13 00:55:00.370 [INFO][4679] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.71/26] IPv6=[] ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" HandleID="k8s-pod-network.e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:55:00.428615 env[1532]: 2025-08-13 00:55:00.372 [INFO][4645] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8nhmz" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"30e30d82-15c7-47b1-9012-021e8bd25177", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"", Pod:"coredns-7c65d6cfc9-8nhmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2804437750b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:00.428615 env[1532]: 2025-08-13 00:55:00.372 [INFO][4645] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.71/32] ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8nhmz" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:55:00.428615 env[1532]: 2025-08-13 00:55:00.372 [INFO][4645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2804437750b ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8nhmz" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:55:00.428615 env[1532]: 2025-08-13 00:55:00.406 [INFO][4645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8nhmz" 
WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:55:00.428615 env[1532]: 2025-08-13 00:55:00.407 [INFO][4645] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8nhmz" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"30e30d82-15c7-47b1-9012-021e8bd25177", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e", Pod:"coredns-7c65d6cfc9-8nhmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2804437750b", MAC:"22:01:fd:5a:0d:2b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:00.428615 env[1532]: 2025-08-13 00:55:00.422 [INFO][4645] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8nhmz" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:55:00.454814 env[1532]: time="2025-08-13T00:55:00.437972389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:00.454814 env[1532]: time="2025-08-13T00:55:00.438021589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:00.454814 env[1532]: time="2025-08-13T00:55:00.438038589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:00.454814 env[1532]: time="2025-08-13T00:55:00.439641305Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb pid=4764 runtime=io.containerd.runc.v2 Aug 13 00:55:00.571841 env[1532]: time="2025-08-13T00:55:00.571770981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:00.572692 env[1532]: time="2025-08-13T00:55:00.572623289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:00.572887 env[1532]: time="2025-08-13T00:55:00.572838591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:00.573167 env[1532]: time="2025-08-13T00:55:00.573105794Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e pid=4813 runtime=io.containerd.runc.v2 Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.538 [INFO][4740] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.538 [INFO][4740] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" iface="eth0" netns="/var/run/netns/cni-23872fe5-5bbd-fbc9-eb0b-07d8092956f5" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.539 [INFO][4740] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" iface="eth0" netns="/var/run/netns/cni-23872fe5-5bbd-fbc9-eb0b-07d8092956f5" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.539 [INFO][4740] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" iface="eth0" netns="/var/run/netns/cni-23872fe5-5bbd-fbc9-eb0b-07d8092956f5" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.540 [INFO][4740] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.540 [INFO][4740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.704 [INFO][4821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.706 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.706 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.721 [WARNING][4821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.721 [INFO][4821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.723 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:00.763070 env[1532]: 2025-08-13 00:55:00.761 [INFO][4740] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:55:00.768086 systemd[1]: run-netns-cni\x2d23872fe5\x2d5bbd\x2dfbc9\x2deb0b\x2d07d8092956f5.mount: Deactivated successfully. Aug 13 00:55:00.770286 env[1532]: time="2025-08-13T00:55:00.770243797Z" level=info msg="TearDown network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\" successfully" Aug 13 00:55:00.770427 env[1532]: time="2025-08-13T00:55:00.770403199Z" level=info msg="StopPodSandbox for \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\" returns successfully" Aug 13 00:55:00.771354 env[1532]: time="2025-08-13T00:55:00.771322608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589fbcc97d-2zc84,Uid:2c90ce12-f5c9-423c-8e01-a32bac086304,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:55:00.792000 audit[4881]: NETFILTER_CFG table=filter:119 family=2 entries=52 op=nft_register_chain pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:00.792000 audit[4881]: SYSCALL arch=c000003e syscall=46 success=yes exit=23908 a0=3 a1=7fff499324a0 a2=0 a3=7fff4993248c items=0 ppid=3878 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:00.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:00.795422 env[1532]: time="2025-08-13T00:55:00.795382240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8nhmz,Uid:30e30d82-15c7-47b1-9012-021e8bd25177,Namespace:kube-system,Attempt:1,} returns sandbox id \"e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e\"" Aug 13 00:55:00.849904 env[1532]: time="2025-08-13T00:55:00.847597044Z" level=info msg="CreateContainer within sandbox \"e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:55:00.849904 env[1532]: time="2025-08-13T00:55:00.847816446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pkkr2,Uid:6c80f539-1a56-46fa-b014-bcb6516c078a,Namespace:calico-system,Attempt:1,} returns sandbox id \"487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487\"" Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.600 [WARNING][4744] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete 
WEP. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a784665e-e2ea-4562-8f6c-bf4d4c9ab351", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"f52718bfcd844cd1a55240117343296b0967d4259fa886025bd89378dae29dd1", Pod:"coredns-7c65d6cfc9-49v42", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcff517bfc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.600 [INFO][4744] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.600 [INFO][4744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" iface="eth0" netns="" Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.601 [INFO][4744] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.601 [INFO][4744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.831 [INFO][4841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.831 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.834 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
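The teardown records above follow a consistent pattern: the CNI plugin acquires the host-wide IPAM lock, tries to release whatever address is registered under the sandbox's handle ID, and treats a missing allocation as a warning to ignore rather than an error, so repeated or late teardowns stay idempotent. A minimal Go sketch of that release pattern (the map-backed store and the handle IDs are illustrative placeholders, not Calico's actual IPAM implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// ipamStore is a toy stand-in for a datastore-backed IPAM:
// one host-wide mutex guards a map from handle ID to assigned address,
// mirroring the "Acquired/Released host-wide IPAM lock" lines above.
type ipamStore struct {
	mu          sync.Mutex
	allocations map[string]string
}

// releaseByHandle mirrors the teardown path in the log: take the lock,
// release the address if it is still recorded, and treat a missing
// allocation as a no-op warning rather than a failure.
func (s *ipamStore) releaseByHandle(handleID string) {
	s.mu.Lock()
	defer s.mu.Unlock()

	addr, ok := s.allocations[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handleID)
		return
	}
	delete(s.allocations, handleID)
	fmt.Printf("released %s for handle %s\n", addr, handleID)
}

func main() {
	s := &ipamStore{allocations: map[string]string{
		"k8s-pod-network.example-handle": "192.168.84.71/32", // hypothetical entry
	}}
	// Releasing a handle that was already cleaned up is harmless,
	// which is why the CNI teardown above only logs a warning.
	s.releaseByHandle("k8s-pod-network.already-gone")
	s.releaseByHandle("k8s-pod-network.example-handle")
}
```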
Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.849 [WARNING][4841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.849 [INFO][4841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" HandleID="k8s-pod-network.66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--49v42-eth0" Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.851 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:00.861947 env[1532]: 2025-08-13 00:55:00.858 [INFO][4744] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e" Aug 13 00:55:00.862766 env[1532]: time="2025-08-13T00:55:00.862734590Z" level=info msg="TearDown network for sandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\" successfully" Aug 13 00:55:00.914576 env[1532]: time="2025-08-13T00:55:00.914523690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589fbcc97d-sqb5w,Uid:27000197-aa78-41c4-95cb-9d77cedc6876,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb\"" Aug 13 00:55:01.001880 systemd-networkd[1717]: cali942a3184d33: Link UP Aug 13 00:55:01.007082 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali942a3184d33: link becomes ready Aug 13 00:55:01.007146 systemd-networkd[1717]: cali942a3184d33: Gained carrier Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.928 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0 calico-apiserver-589fbcc97d- calico-apiserver 2c90ce12-f5c9-423c-8e01-a32bac086304 986 0 2025-08-13 00:54:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:589fbcc97d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-a-1859c445b4 calico-apiserver-589fbcc97d-2zc84 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali942a3184d33 [] [] }} ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-2zc84" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.928 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-2zc84" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.953 [INFO][4902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" 
HandleID="k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.953 [INFO][4902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" HandleID="k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000250ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-a-1859c445b4", "pod":"calico-apiserver-589fbcc97d-2zc84", "timestamp":"2025-08-13 00:55:00.953149163 +0000 UTC"}, Hostname:"ci-3510.3.8-a-1859c445b4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.953 [INFO][4902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.953 [INFO][4902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.953 [INFO][4902] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-1859c445b4' Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.960 [INFO][4902] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.964 [INFO][4902] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.975 [INFO][4902] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.977 [INFO][4902] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.979 [INFO][4902] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.979 [INFO][4902] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.980 [INFO][4902] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.984 [INFO][4902] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.995 [INFO][4902] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.72/26] block=192.168.84.64/26 handle="k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.995 [INFO][4902] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.84.72/26] handle="k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" host="ci-3510.3.8-a-1859c445b4" Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.995 [INFO][4902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:01.028488 env[1532]: 2025-08-13 00:55:00.995 [INFO][4902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.72/26] IPv6=[] ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" HandleID="k8s-pod-network.5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:01.029504 env[1532]: 2025-08-13 00:55:00.997 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-2zc84" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0", GenerateName:"calico-apiserver-589fbcc97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c90ce12-f5c9-423c-8e01-a32bac086304", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589fbcc97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"", Pod:"calico-apiserver-589fbcc97d-2zc84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali942a3184d33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:01.029504 env[1532]: 2025-08-13 00:55:00.997 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.72/32] ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-2zc84" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:01.029504 env[1532]: 2025-08-13 00:55:00.997 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali942a3184d33 ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-2zc84" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:01.029504 env[1532]: 2025-08-13 00:55:01.008 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" 
Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-2zc84" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:01.029504 env[1532]: 2025-08-13 00:55:01.008 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-2zc84" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0", GenerateName:"calico-apiserver-589fbcc97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c90ce12-f5c9-423c-8e01-a32bac086304", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589fbcc97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f", Pod:"calico-apiserver-589fbcc97d-2zc84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali942a3184d33", MAC:"22:dd:57:17:0a:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:01.029504 env[1532]: 2025-08-13 00:55:01.026 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f" Namespace="calico-apiserver" Pod="calico-apiserver-589fbcc97d-2zc84" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:55:01.041000 audit[4917]: NETFILTER_CFG table=filter:120 family=2 entries=61 op=nft_register_chain pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:01.041000 audit[4917]: SYSCALL arch=c000003e syscall=46 success=yes exit=29016 a0=3 a1=7ffdb816b0a0 a2=0 a3=7ffdb816b08c items=0 ppid=3878 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:01.041000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:01.254934 env[1532]: time="2025-08-13T00:55:01.253918141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:01.254934 env[1532]: time="2025-08-13T00:55:01.253952841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:01.254934 env[1532]: time="2025-08-13T00:55:01.253965242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:01.254934 env[1532]: time="2025-08-13T00:55:01.254118243Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f pid=4925 runtime=io.containerd.runc.v2 Aug 13 00:55:01.255969 env[1532]: time="2025-08-13T00:55:01.255340955Z" level=info msg="RemovePodSandbox \"66c95a67742cf03b66cd35c9eaeb71645f27526a8b8e3496d284f2fabafc106e\" returns successfully" Aug 13 00:55:01.256837 env[1532]: time="2025-08-13T00:55:01.256809869Z" level=info msg="StopPodSandbox for \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\"" Aug 13 00:55:01.311577 env[1532]: time="2025-08-13T00:55:01.311464991Z" level=info msg="CreateContainer within sandbox \"e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b0c2f8a88627f6a8f69289ffbdf67968a967410caab98db4bbeb68f781afc53\"" Aug 13 00:55:01.315534 env[1532]: time="2025-08-13T00:55:01.313631311Z" level=info msg="StartContainer for \"8b0c2f8a88627f6a8f69289ffbdf67968a967410caab98db4bbeb68f781afc53\"" Aug 13 00:55:01.365954 systemd-networkd[1717]: cali703ae353144: Gained IPv6LL Aug 13 00:55:01.479484 env[1532]: time="2025-08-13T00:55:01.479426895Z" level=info msg="StartContainer for \"8b0c2f8a88627f6a8f69289ffbdf67968a967410caab98db4bbeb68f781afc53\" returns successfully" Aug 13 00:55:01.485038 env[1532]: time="2025-08-13T00:55:01.484998548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589fbcc97d-2zc84,Uid:2c90ce12-f5c9-423c-8e01-a32bac086304,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f\"" Aug 13 00:55:01.539091 kubelet[2605]: I0813 00:55:01.538813 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8nhmz" podStartSLOduration=57.538777162 podStartE2EDuration="57.538777162s" podCreationTimestamp="2025-08-13 00:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:01.52917267 +0000 UTC m=+62.431533749" watchObservedRunningTime="2025-08-13 00:55:01.538777162 +0000 UTC m=+62.441138241" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.417 [WARNING][4956] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.417 [INFO][4956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.417 [INFO][4956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" iface="eth0" netns="" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.417 [INFO][4956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.418 [INFO][4956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.522 [INFO][5000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.525 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.525 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.544 [WARNING][5000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.544 [INFO][5000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.547 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:01.554520 env[1532]: 2025-08-13 00:55:01.552 [INFO][4956] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:55:01.555907 env[1532]: time="2025-08-13T00:55:01.555852625Z" level=info msg="TearDown network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\" successfully" Aug 13 00:55:01.556068 env[1532]: time="2025-08-13T00:55:01.556038827Z" level=info msg="StopPodSandbox for \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\" returns successfully" Aug 13 00:55:01.557505 env[1532]: time="2025-08-13T00:55:01.557463540Z" level=info msg="RemovePodSandbox for \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\"" Aug 13 00:55:01.557695 env[1532]: time="2025-08-13T00:55:01.557628942Z" level=info msg="Forcibly stopping sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\"" Aug 13 00:55:01.561000 audit[5021]: NETFILTER_CFG table=filter:121 family=2 entries=12 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:01.561000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc25cda910 a2=0 a3=7ffc25cda8fc items=0 ppid=2726 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:01.561000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:01.566000 audit[5021]: NETFILTER_CFG table=nat:122 family=2 entries=46 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:01.566000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffc25cda910 a2=0 a3=7ffc25cda8fc items=0 ppid=2726 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:01.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:01.626104 systemd-networkd[1717]: calie596f395d92: Gained IPv6LL Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.636 [WARNING][5030] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" WorkloadEndpoint="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.636 [INFO][5030] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.636 [INFO][5030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" iface="eth0" netns="" Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.636 [INFO][5030] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.636 [INFO][5030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.671 [INFO][5037] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.671 [INFO][5037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.672 [INFO][5037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.679 [WARNING][5037] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.679 [INFO][5037] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" HandleID="k8s-pod-network.79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Workload="ci--3510.3.8--a--1859c445b4-k8s-whisker--7cc5458554--jzl57-eth0" Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.684 [INFO][5037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:01.686768 env[1532]: 2025-08-13 00:55:01.685 [INFO][5030] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247" Aug 13 00:55:01.687367 env[1532]: time="2025-08-13T00:55:01.686776675Z" level=info msg="TearDown network for sandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\" successfully" Aug 13 00:55:01.695765 env[1532]: time="2025-08-13T00:55:01.695719461Z" level=info msg="RemovePodSandbox \"79c16fc01583d3a749f00e3bd9bffb48b5b8ab02f81a108922880f11b4575247\" returns successfully" Aug 13 00:55:01.696280 env[1532]: time="2025-08-13T00:55:01.696248766Z" level=info msg="StopPodSandbox for \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\"" Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.779 [WARNING][5053] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5dfe102c-5690-449a-a336-a40d559d5b09", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d", Pod:"csi-node-driver-qf7t2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39d62b69845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.780 [INFO][5053] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.780 [INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" iface="eth0" netns="" Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.780 [INFO][5053] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.780 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.824 [INFO][5060] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.824 [INFO][5060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.824 [INFO][5060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.831 [WARNING][5060] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.831 [INFO][5060] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.833 [INFO][5060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:01.838734 env[1532]: 2025-08-13 00:55:01.835 [INFO][5053] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:55:01.838734 env[1532]: time="2025-08-13T00:55:01.837636816Z" level=info msg="TearDown network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\" successfully" Aug 13 00:55:01.838734 env[1532]: time="2025-08-13T00:55:01.837676716Z" level=info msg="StopPodSandbox for \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\" returns successfully" Aug 13 00:55:01.838734 env[1532]: time="2025-08-13T00:55:01.838199721Z" level=info msg="RemovePodSandbox for \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\"" Aug 13 00:55:01.838734 env[1532]: time="2025-08-13T00:55:01.838262722Z" level=info msg="Forcibly stopping sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\"" Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.920 [WARNING][5075] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5dfe102c-5690-449a-a336-a40d559d5b09", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d", Pod:"csi-node-driver-qf7t2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39d62b69845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.920 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.921 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" iface="eth0" netns="" Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.921 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.921 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.966 [INFO][5082] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.966 [INFO][5082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.966 [INFO][5082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.973 [WARNING][5082] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.973 [INFO][5082] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" HandleID="k8s-pod-network.26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Workload="ci--3510.3.8--a--1859c445b4-k8s-csi--node--driver--qf7t2-eth0" Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.975 [INFO][5082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:01.978436 env[1532]: 2025-08-13 00:55:01.977 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3" Aug 13 00:55:01.979101 env[1532]: time="2025-08-13T00:55:01.978454761Z" level=info msg="TearDown network for sandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\" successfully" Aug 13 00:55:01.986845 env[1532]: time="2025-08-13T00:55:01.986799041Z" level=info msg="RemovePodSandbox \"26161b89130824c35741772e53df3d999aa9b6fc144db23985f572800f7923f3\" returns successfully" Aug 13 00:55:02.453343 systemd-networkd[1717]: cali942a3184d33: Gained IPv6LL Aug 13 00:55:02.455839 systemd-networkd[1717]: cali2804437750b: Gained IPv6LL Aug 13 00:55:02.650000 audit[5110]: NETFILTER_CFG table=filter:123 family=2 entries=12 op=nft_register_rule pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:02.650000 audit[5110]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcb4a79390 a2=0 a3=7ffcb4a7937c items=0 ppid=2726 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:02.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:02.845000 audit[5110]: NETFILTER_CFG table=nat:124 family=2 entries=58 op=nft_register_chain pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:02.845000 audit[5110]: SYSCALL arch=c000003e syscall=46 success=yes exit=20628 a0=3 a1=7ffcb4a79390 a2=0 a3=7ffcb4a7937c items=0 ppid=2726 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:02.845000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:02.947472 env[1532]: time="2025-08-13T00:55:02.947422819Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:02.955166 env[1532]: time="2025-08-13T00:55:02.955127491Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:02.959513 env[1532]: time="2025-08-13T00:55:02.959480333Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:02.965129 env[1532]: time="2025-08-13T00:55:02.965093586Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:02.965636 env[1532]: time="2025-08-13T00:55:02.965605790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:55:02.967321 env[1532]: time="2025-08-13T00:55:02.967294606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:55:02.983116 env[1532]: time="2025-08-13T00:55:02.976460593Z" level=info msg="CreateContainer within sandbox \"6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:55:03.026833 env[1532]: time="2025-08-13T00:55:03.026774666Z" level=info msg="CreateContainer within sandbox \"6e996d54384ec66c0898ad13e736a2b640319d8a16fb6b5993dd1a9aa0a97138\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"226020fbefa058b226391c7a763482a99250c56940a2a54c02771981f6b67683\"" Aug 13 00:55:03.027499 env[1532]: time="2025-08-13T00:55:03.027467572Z" level=info msg="StartContainer for \"226020fbefa058b226391c7a763482a99250c56940a2a54c02771981f6b67683\"" Aug 13 00:55:03.136317 env[1532]: time="2025-08-13T00:55:03.136195689Z" level=info msg="StartContainer for \"226020fbefa058b226391c7a763482a99250c56940a2a54c02771981f6b67683\" returns successfully" Aug 13 00:55:03.532593 kubelet[2605]: I0813 00:55:03.532445 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7987d7d768-cg7mk" podStartSLOduration=25.287442599 podStartE2EDuration="31.532422993s" podCreationTimestamp="2025-08-13 00:54:32 +0000 UTC" firstStartedPulling="2025-08-13 00:54:56.721777707 +0000 UTC m=+57.624138686" lastFinishedPulling="2025-08-13 00:55:02.966758001 +0000 UTC m=+63.869119080" observedRunningTime="2025-08-13 00:55:03.531203482 +0000 UTC m=+64.433564461" watchObservedRunningTime="2025-08-13 00:55:03.532422993 +0000 UTC m=+64.434783972" Aug 13 00:55:03.975810 systemd[1]: run-containerd-runc-k8s.io-226020fbefa058b226391c7a763482a99250c56940a2a54c02771981f6b67683-runc.rpmH3v.mount: Deactivated successfully. 
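The kubelet pod_startup_latency_tracker record above for calico-kube-controllers-7987d7d768-cg7mk reports both podStartE2EDuration and podStartSLOduration. The numbers are consistent with the SLO figure being the end-to-end duration (observed running time minus pod creation time) with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted; that reading of the record is an assumption, not a quote of kubelet code. A small Go check using the timestamps and the monotonic offsets from the log line:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the pod_startup_latency_tracker record above.
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2025-08-13 00:54:32 +0000 UTC")
	running, _ := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST", "2025-08-13 00:55:03.532422993 +0000 UTC")

	// Monotonic offsets (the "m=+..." part of the timestamps) for the
	// image-pull window, which avoid wall-clock jitter.
	firstPull := 57.624138686
	lastPull := 63.869119080

	e2e := running.Sub(created) // podStartE2EDuration
	slo := e2e - time.Duration((lastPull-firstPull)*float64(time.Second))

	fmt.Println("E2E:", e2e) // 31.532422993s, matching the log
	fmt.Println("SLO:", slo) // ~25.2874426s, matching podStartSLOduration up to rounding
}
```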
Aug 13 00:55:04.841023 env[1532]: time="2025-08-13T00:55:04.840968147Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:04.847331 env[1532]: time="2025-08-13T00:55:04.847287205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:04.850770 env[1532]: time="2025-08-13T00:55:04.850724437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:04.854900 env[1532]: time="2025-08-13T00:55:04.854871176Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:04.855316 env[1532]: time="2025-08-13T00:55:04.855286379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:55:04.857726 env[1532]: time="2025-08-13T00:55:04.857240798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:55:04.858095 env[1532]: time="2025-08-13T00:55:04.858066105Z" level=info msg="CreateContainer within sandbox \"3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:55:04.899803 env[1532]: time="2025-08-13T00:55:04.899756191Z" level=info msg="CreateContainer within sandbox \"3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7c1e4e07c3d0aafea2dfae73f25756b3f6d6e0f6749e07a7005596fa08cb290d\"" Aug 13 00:55:04.900423 env[1532]: time="2025-08-13T00:55:04.900382097Z" level=info msg="StartContainer for \"7c1e4e07c3d0aafea2dfae73f25756b3f6d6e0f6749e07a7005596fa08cb290d\"" Aug 13 00:55:04.965896 env[1532]: time="2025-08-13T00:55:04.965837202Z" level=info msg="StartContainer for \"7c1e4e07c3d0aafea2dfae73f25756b3f6d6e0f6749e07a7005596fa08cb290d\" returns successfully" Aug 13 00:55:04.976977 systemd[1]: run-containerd-runc-k8s.io-7c1e4e07c3d0aafea2dfae73f25756b3f6d6e0f6749e07a7005596fa08cb290d-runc.SBho9H.mount: Deactivated successfully. Aug 13 00:55:08.511615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986938792.mount: Deactivated successfully. 
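The image records above distinguish three identifiers for the same image: the pulled tag (ghcr.io/flatcar/calico/csi:v3.30.2), the digest that PullImage reports as the returned image reference (sha256:c7fd1cc6...), and the name@sha256 repo digest in the ImageCreate event. A small Go sketch that scrapes the tag and the returned reference out of such a containerd message (the regular expression and the already-unescaped message string are assumptions for illustration, not part of containerd):

```go
package main

import (
	"fmt"
	"regexp"
)

// pullResult matches the containerd "PullImage ... returns image reference ..."
// messages seen in the log above, after journald escaping has been undone.
var pullResult = regexp.MustCompile(`PullImage "([^"]+)" returns image reference "([^"]+)"`)

func main() {
	msg := `PullImage "ghcr.io/flatcar/calico/csi:v3.30.2" returns image reference "sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d"`

	if m := pullResult.FindStringSubmatch(msg); m != nil {
		fmt.Println("pulled tag:       ", m[1]) // ghcr.io/flatcar/calico/csi:v3.30.2
		fmt.Println("resolved reference:", m[2]) // sha256:c7fd1cc6...
	}
}
```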
Aug 13 00:55:09.807886 env[1532]: time="2025-08-13T00:55:09.807822354Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:09.815655 env[1532]: time="2025-08-13T00:55:09.815610822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:09.820667 env[1532]: time="2025-08-13T00:55:09.820630667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:09.824807 env[1532]: time="2025-08-13T00:55:09.824768703Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:09.825220 env[1532]: time="2025-08-13T00:55:09.825180607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:55:09.828075 env[1532]: time="2025-08-13T00:55:09.828039932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:55:09.829014 env[1532]: time="2025-08-13T00:55:09.828979040Z" level=info msg="CreateContainer within sandbox \"487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:55:09.861802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432642715.mount: Deactivated successfully. 
Aug 13 00:55:09.870765 env[1532]: time="2025-08-13T00:55:09.870721808Z" level=info msg="CreateContainer within sandbox \"487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13\"" Aug 13 00:55:09.871466 env[1532]: time="2025-08-13T00:55:09.871433114Z" level=info msg="StartContainer for \"0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13\"" Aug 13 00:55:09.988001 env[1532]: time="2025-08-13T00:55:09.987941642Z" level=info msg="StartContainer for \"0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13\" returns successfully" Aug 13 00:55:10.605996 kernel: kauditd_printk_skb: 20 callbacks suppressed Aug 13 00:55:10.606151 kernel: audit: type=1325 audit(1755046510.592:421): table=filter:125 family=2 entries=12 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:10.592000 audit[5268]: NETFILTER_CFG table=filter:125 family=2 entries=12 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:10.592000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff1f9589c0 a2=0 a3=7fff1f9589ac items=0 ppid=2726 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:10.626826 kernel: audit: type=1300 audit(1755046510.592:421): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff1f9589c0 a2=0 a3=7fff1f9589ac items=0 ppid=2726 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:10.592000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:10.636882 kernel: audit: type=1327 audit(1755046510.592:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:10.640000 audit[5268]: NETFILTER_CFG table=nat:126 family=2 entries=22 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:10.656873 kernel: audit: type=1325 audit(1755046510.640:422): table=nat:126 family=2 entries=22 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:10.656964 kernel: audit: type=1300 audit(1755046510.640:422): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff1f9589c0 a2=0 a3=7fff1f9589ac items=0 ppid=2726 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:10.640000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff1f9589c0 a2=0 a3=7fff1f9589ac items=0 ppid=2726 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:10.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:10.691939 
kernel: audit: type=1327 audit(1755046510.640:422): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:11.675235 systemd[1]: run-containerd-runc-k8s.io-0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13-runc.ZEFTDt.mount: Deactivated successfully. Aug 13 00:55:12.584947 systemd[1]: run-containerd-runc-k8s.io-0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13-runc.FxxIQL.mount: Deactivated successfully. Aug 13 00:55:17.177548 env[1532]: time="2025-08-13T00:55:17.177507540Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:17.184345 env[1532]: time="2025-08-13T00:55:17.184305696Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:17.191207 env[1532]: time="2025-08-13T00:55:17.191171153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:17.200819 env[1532]: time="2025-08-13T00:55:17.200783732Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:17.201395 env[1532]: time="2025-08-13T00:55:17.201360737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:55:17.203085 env[1532]: time="2025-08-13T00:55:17.203048651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:55:17.203866 env[1532]: time="2025-08-13T00:55:17.203817357Z" level=info msg="CreateContainer within sandbox \"2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:55:17.243741 env[1532]: time="2025-08-13T00:55:17.243689886Z" level=info msg="CreateContainer within sandbox \"2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc3842a96136cb877e2351f8c2396f827e1bb4c825ff55daf0e0c08dc4cd3ecc\"" Aug 13 00:55:17.244576 env[1532]: time="2025-08-13T00:55:17.244536293Z" level=info msg="StartContainer for \"cc3842a96136cb877e2351f8c2396f827e1bb4c825ff55daf0e0c08dc4cd3ecc\"" Aug 13 00:55:17.299776 systemd[1]: run-containerd-runc-k8s.io-cc3842a96136cb877e2351f8c2396f827e1bb4c825ff55daf0e0c08dc4cd3ecc-runc.ouhk5N.mount: Deactivated successfully. 
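Note: the PROCTITLE field in the audit records above is the process argv, hex-encoded with NUL bytes separating the arguments. Decoding the value that repeats throughout these NETFILTER_CFG events (standard-library sketch, nothing assumed beyond the encoding) recovers the command that registered the rules:

```python
# PROCTITLE value from the audit records above: argv hex-encoded, NUL-separated.
proctitle = ("69707461626C65732D726573746F7265002D770035002D5700"
             "313030303030002D2D6E6F666C757368002D2D636F756E74657273")

argv = [arg.decode() for arg in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

This matches the exe="/usr/sbin/xtables-nft-multi" field: iptables-restore is the nft-backed multi-call binary, restoring filter and nat rules without flushing existing chains (--noflush) while preserving counters and waiting up to 5 s for the xtables lock.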
Aug 13 00:55:17.372490 env[1532]: time="2025-08-13T00:55:17.372438748Z" level=info msg="StartContainer for \"cc3842a96136cb877e2351f8c2396f827e1bb4c825ff55daf0e0c08dc4cd3ecc\" returns successfully" Aug 13 00:55:17.552286 env[1532]: time="2025-08-13T00:55:17.552169530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:17.559341 env[1532]: time="2025-08-13T00:55:17.559304289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:17.570092 env[1532]: time="2025-08-13T00:55:17.570054978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:17.576349 env[1532]: time="2025-08-13T00:55:17.576311129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:17.577323 env[1532]: time="2025-08-13T00:55:17.577292037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:55:17.579723 env[1532]: time="2025-08-13T00:55:17.579693857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:55:17.580669 env[1532]: time="2025-08-13T00:55:17.580639365Z" level=info msg="CreateContainer within sandbox \"5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:55:17.586706 kubelet[2605]: I0813 00:55:17.586330 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-pkkr2" podStartSLOduration=37.612808991 podStartE2EDuration="46.586307112s" podCreationTimestamp="2025-08-13 00:54:31 +0000 UTC" firstStartedPulling="2025-08-13 00:55:00.853558502 +0000 UTC m=+61.755919581" lastFinishedPulling="2025-08-13 00:55:09.827056623 +0000 UTC m=+70.729417702" observedRunningTime="2025-08-13 00:55:10.559080633 +0000 UTC m=+71.461441612" watchObservedRunningTime="2025-08-13 00:55:17.586307112 +0000 UTC m=+78.488668091" Aug 13 00:55:17.621309 env[1532]: time="2025-08-13T00:55:17.621259300Z" level=info msg="CreateContainer within sandbox \"5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ccbd59667ec690847c969db6fa461816ffc7d993c6368f493a9863f66f4086e8\"" Aug 13 00:55:17.622510 env[1532]: time="2025-08-13T00:55:17.622477410Z" level=info msg="StartContainer for \"ccbd59667ec690847c969db6fa461816ffc7d993c6368f493a9863f66f4086e8\"" Aug 13 00:55:17.777000 audit[5413]: NETFILTER_CFG table=filter:127 family=2 entries=12 op=nft_register_rule pid=5413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:17.790888 kernel: audit: type=1325 audit(1755046517.777:423): table=filter:127 family=2 entries=12 op=nft_register_rule pid=5413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:17.777000 audit[5413]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 
a1=7ffedce11710 a2=0 a3=7ffedce116fc items=0 ppid=2726 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:17.822890 kernel: audit: type=1300 audit(1755046517.777:423): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffedce11710 a2=0 a3=7ffedce116fc items=0 ppid=2726 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:17.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:17.866902 kernel: audit: type=1327 audit(1755046517.777:423): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:17.798000 audit[5413]: NETFILTER_CFG table=nat:128 family=2 entries=22 op=nft_register_rule pid=5413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:17.798000 audit[5413]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffedce11710 a2=0 a3=7ffedce116fc items=0 ppid=2726 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:17.897813 kernel: audit: type=1325 audit(1755046517.798:424): table=nat:128 family=2 entries=22 op=nft_register_rule pid=5413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:17.897977 kernel: audit: type=1300 audit(1755046517.798:424): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffedce11710 a2=0 a3=7ffedce116fc items=0 ppid=2726 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:17.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:17.919165 kernel: audit: type=1327 audit(1755046517.798:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:17.946834 env[1532]: time="2025-08-13T00:55:17.946771384Z" level=info msg="StartContainer for \"ccbd59667ec690847c969db6fa461816ffc7d993c6368f493a9863f66f4086e8\" returns successfully" Aug 13 00:55:18.584243 kubelet[2605]: I0813 00:55:18.584168 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-589fbcc97d-2zc84" podStartSLOduration=34.497665373 podStartE2EDuration="50.584145305s" podCreationTimestamp="2025-08-13 00:54:28 +0000 UTC" firstStartedPulling="2025-08-13 00:55:01.492313118 +0000 UTC m=+62.394674097" lastFinishedPulling="2025-08-13 00:55:17.57879305 +0000 UTC m=+78.481154029" observedRunningTime="2025-08-13 00:55:18.5835433 +0000 UTC m=+79.485904379" watchObservedRunningTime="2025-08-13 00:55:18.584145305 +0000 UTC m=+79.486506284" Aug 13 00:55:18.584498 kubelet[2605]: I0813 00:55:18.584309 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-589fbcc97d-sqb5w" podStartSLOduration=34.29848527 podStartE2EDuration="50.584298406s" 
podCreationTimestamp="2025-08-13 00:54:28 +0000 UTC" firstStartedPulling="2025-08-13 00:55:00.916470409 +0000 UTC m=+61.818831388" lastFinishedPulling="2025-08-13 00:55:17.202283545 +0000 UTC m=+78.104644524" observedRunningTime="2025-08-13 00:55:17.586686215 +0000 UTC m=+78.489047194" watchObservedRunningTime="2025-08-13 00:55:18.584298406 +0000 UTC m=+79.486659385" Aug 13 00:55:18.614000 audit[5437]: NETFILTER_CFG table=filter:129 family=2 entries=12 op=nft_register_rule pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:18.614000 audit[5437]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffdacf01d00 a2=0 a3=7ffdacf01cec items=0 ppid=2726 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:18.647345 kernel: audit: type=1325 audit(1755046518.614:425): table=filter:129 family=2 entries=12 op=nft_register_rule pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:18.647471 kernel: audit: type=1300 audit(1755046518.614:425): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffdacf01d00 a2=0 a3=7ffdacf01cec items=0 ppid=2726 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:18.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:18.658267 kernel: audit: type=1327 audit(1755046518.614:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:18.645000 audit[5437]: NETFILTER_CFG table=nat:130 family=2 entries=22 op=nft_register_rule pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:18.669447 kernel: audit: type=1325 audit(1755046518.645:426): table=nat:130 family=2 entries=22 op=nft_register_rule pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:18.645000 audit[5437]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdacf01d00 a2=0 a3=7ffdacf01cec items=0 ppid=2726 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:18.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:19.582878 kubelet[2605]: I0813 00:55:19.579118 2605 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:55:20.024649 env[1532]: time="2025-08-13T00:55:20.024532633Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:20.032692 env[1532]: time="2025-08-13T00:55:20.032579398Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:20.050644 env[1532]: time="2025-08-13T00:55:20.050597743Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:20.072687 env[1532]: time="2025-08-13T00:55:20.072634421Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:20.073754 env[1532]: time="2025-08-13T00:55:20.073689929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 00:55:20.076868 env[1532]: time="2025-08-13T00:55:20.076817955Z" level=info msg="CreateContainer within sandbox \"3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:55:20.149077 env[1532]: time="2025-08-13T00:55:20.149009737Z" level=info msg="CreateContainer within sandbox \"3689ee1a7b377e2d853aa3facc6da8090977d3b78163547e9dc434b2c340aa9d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ee144194c284ea41379cbed253bc4e948b4015c2b261bbfea5a173ae1389e59b\"" Aug 13 00:55:20.149741 env[1532]: time="2025-08-13T00:55:20.149700943Z" level=info msg="StartContainer for \"ee144194c284ea41379cbed253bc4e948b4015c2b261bbfea5a173ae1389e59b\"" Aug 13 00:55:20.217740 systemd[1]: run-containerd-runc-k8s.io-ee144194c284ea41379cbed253bc4e948b4015c2b261bbfea5a173ae1389e59b-runc.39RygK.mount: Deactivated successfully. Aug 13 00:55:20.344476 env[1532]: time="2025-08-13T00:55:20.344423514Z" level=info msg="StartContainer for \"ee144194c284ea41379cbed253bc4e948b4015c2b261bbfea5a173ae1389e59b\" returns successfully" Aug 13 00:55:20.366174 kubelet[2605]: I0813 00:55:20.366135 2605 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:55:20.366174 kubelet[2605]: I0813 00:55:20.366175 2605 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:55:20.582027 kubelet[2605]: I0813 00:55:20.581989 2605 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:55:21.160647 kubelet[2605]: I0813 00:55:21.160576 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qf7t2" podStartSLOduration=26.900166612 podStartE2EDuration="49.160553189s" podCreationTimestamp="2025-08-13 00:54:32 +0000 UTC" firstStartedPulling="2025-08-13 00:54:57.814732664 +0000 UTC m=+58.717093643" lastFinishedPulling="2025-08-13 00:55:20.075119141 +0000 UTC m=+80.977480220" observedRunningTime="2025-08-13 00:55:20.612117473 +0000 UTC m=+81.514478452" watchObservedRunningTime="2025-08-13 00:55:21.160553189 +0000 UTC m=+82.062914168" Aug 13 00:55:21.199000 audit[5474]: NETFILTER_CFG table=filter:131 family=2 entries=11 op=nft_register_rule pid=5474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:21.199000 audit[5474]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd9ff11230 a2=0 a3=7ffd9ff1121c items=0 ppid=2726 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:21.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:21.202000 audit[5474]: NETFILTER_CFG table=nat:132 family=2 entries=29 op=nft_register_chain pid=5474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:21.202000 audit[5474]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd9ff11230 a2=0 a3=7ffd9ff1121c items=0 ppid=2726 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:21.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:21.791000 audit[5476]: NETFILTER_CFG table=filter:133 family=2 entries=10 op=nft_register_rule pid=5476 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:21.791000 audit[5476]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffcafc64270 a2=0 a3=7ffcafc6425c items=0 ppid=2726 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:21.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:21.798000 audit[5476]: NETFILTER_CFG table=nat:134 family=2 entries=36 op=nft_register_chain pid=5476 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:21.798000 audit[5476]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffcafc64270 a2=0 a3=7ffcafc6425c items=0 ppid=2726 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:21.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:26.268970 systemd[1]: run-containerd-runc-k8s.io-0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13-runc.9ska5M.mount: Deactivated successfully. Aug 13 00:55:32.490792 systemd[1]: run-containerd-runc-k8s.io-994264523b6dc9d632b7e5ed1d5fc06d05c047e11318ed3b142dfb8736f5bc0e-runc.Vo0o6p.mount: Deactivated successfully. Aug 13 00:55:42.922760 systemd[1]: run-containerd-runc-k8s.io-226020fbefa058b226391c7a763482a99250c56940a2a54c02771981f6b67683-runc.cfIW7Z.mount: Deactivated successfully. Aug 13 00:55:42.948303 systemd[1]: run-containerd-runc-k8s.io-0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13-runc.yFfH5W.mount: Deactivated successfully. 
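Note: the kubelet pod_startup_latency_tracker lines above report two figures per pod, and the numbers are consistent with podStartSLOduration being podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). A quick check with the goldmane-58fd7646b9-pkkr2 values, copied verbatim from the log and computed with exact decimal arithmetic:

```python
from decimal import Decimal as D

# Figures copied from the goldmane-58fd7646b9-pkkr2 line above.
e2e = D("46.586307112")            # podStartE2EDuration
slo = D("37.612808991")            # podStartSLOduration
pull_start  = D("3300.853558502")  # firstStartedPulling  00:55:00.853558502, as seconds past 00:00 UTC
pull_finish = D("3309.827056623")  # lastFinishedPulling  00:55:09.827056623, as seconds past 00:00 UTC

image_pull = pull_finish - pull_start
print(image_pull)                 # 8.973498121 seconds spent pulling images
assert e2e - image_pull == slo    # 37.612808991, the reported SLO duration
```

The calico-apiserver-589fbcc97d-2zc84 entry above checks out the same way: 50.584145305 - (17.57879305 - 1.492313118) = 34.497665373.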
Aug 13 00:55:43.065000 audit[5569]: NETFILTER_CFG table=filter:135 family=2 entries=9 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:43.070397 kernel: kauditd_printk_skb: 14 callbacks suppressed Aug 13 00:55:43.070509 kernel: audit: type=1325 audit(1755046543.065:431): table=filter:135 family=2 entries=9 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:43.065000 audit[5569]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffdff9c57b0 a2=0 a3=7ffdff9c579c items=0 ppid=2726 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:43.096934 kernel: audit: type=1300 audit(1755046543.065:431): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffdff9c57b0 a2=0 a3=7ffdff9c579c items=0 ppid=2726 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:43.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:43.105982 kernel: audit: type=1327 audit(1755046543.065:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:43.107000 audit[5569]: NETFILTER_CFG table=nat:136 family=2 entries=31 op=nft_register_chain pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:43.107000 audit[5569]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffdff9c57b0 a2=0 a3=7ffdff9c579c items=0 ppid=2726 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:43.135824 kernel: audit: type=1325 audit(1755046543.107:432): table=nat:136 family=2 entries=31 op=nft_register_chain pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:43.135953 kernel: audit: type=1300 audit(1755046543.107:432): arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffdff9c57b0 a2=0 a3=7ffdff9c579c items=0 ppid=2726 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:43.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:43.146898 kernel: audit: type=1327 audit(1755046543.107:432): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:01.989995 env[1532]: time="2025-08-13T00:56:01.989934646Z" level=info msg="StopPodSandbox for \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\"" Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.026 [WARNING][5580] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6c80f539-1a56-46fa-b014-bcb6516c078a", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487", Pod:"goldmane-58fd7646b9-pkkr2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie596f395d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.026 [INFO][5580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.026 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" iface="eth0" netns="" Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.026 [INFO][5580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.026 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.045 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.045 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.045 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.051 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.051 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.052 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:02.054758 env[1532]: 2025-08-13 00:56:02.053 [INFO][5580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:56:02.055508 env[1532]: time="2025-08-13T00:56:02.054793145Z" level=info msg="TearDown network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\" successfully" Aug 13 00:56:02.055508 env[1532]: time="2025-08-13T00:56:02.054830846Z" level=info msg="StopPodSandbox for \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\" returns successfully" Aug 13 00:56:02.055508 env[1532]: time="2025-08-13T00:56:02.055374758Z" level=info msg="RemovePodSandbox for \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\"" Aug 13 00:56:02.055508 env[1532]: time="2025-08-13T00:56:02.055417459Z" level=info msg="Forcibly stopping sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\"" Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.094 [WARNING][5602] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6c80f539-1a56-46fa-b014-bcb6516c078a", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"487f8988f6a1696f19d26e07ca20360f711d301c454f3ca543530dac94805487", Pod:"goldmane-58fd7646b9-pkkr2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie596f395d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.094 [INFO][5602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.094 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" iface="eth0" netns="" Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.094 [INFO][5602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.094 [INFO][5602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.116 [INFO][5610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.116 [INFO][5610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.116 [INFO][5610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.122 [WARNING][5610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.122 [INFO][5610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" HandleID="k8s-pod-network.9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Workload="ci--3510.3.8--a--1859c445b4-k8s-goldmane--58fd7646b9--pkkr2-eth0" Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.123 [INFO][5610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:02.126278 env[1532]: 2025-08-13 00:56:02.125 [INFO][5602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02" Aug 13 00:56:02.126984 env[1532]: time="2025-08-13T00:56:02.126319295Z" level=info msg="TearDown network for sandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\" successfully" Aug 13 00:56:02.136152 env[1532]: time="2025-08-13T00:56:02.134024472Z" level=info msg="RemovePodSandbox \"9e477be567e22e746e6f686adcfcdb7480a808389b352a6b9ca6d526b2dc3f02\" returns successfully" Aug 13 00:56:02.136513 env[1532]: time="2025-08-13T00:56:02.136483729Z" level=info msg="StopPodSandbox for \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\"" Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.171 [WARNING][5624] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0", GenerateName:"calico-apiserver-589fbcc97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c90ce12-f5c9-423c-8e01-a32bac086304", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589fbcc97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f", Pod:"calico-apiserver-589fbcc97d-2zc84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali942a3184d33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.171 [INFO][5624] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.172 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" iface="eth0" netns="" Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.172 [INFO][5624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.172 [INFO][5624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.190 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.190 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.190 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.195 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.195 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.197 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:02.199458 env[1532]: 2025-08-13 00:56:02.198 [INFO][5624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:56:02.200225 env[1532]: time="2025-08-13T00:56:02.199488882Z" level=info msg="TearDown network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\" successfully" Aug 13 00:56:02.200225 env[1532]: time="2025-08-13T00:56:02.199531783Z" level=info msg="StopPodSandbox for \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\" returns successfully" Aug 13 00:56:02.200225 env[1532]: time="2025-08-13T00:56:02.200046495Z" level=info msg="RemovePodSandbox for \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\"" Aug 13 00:56:02.200225 env[1532]: time="2025-08-13T00:56:02.200086696Z" level=info msg="Forcibly stopping sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\"" Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.235 [WARNING][5646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0", GenerateName:"calico-apiserver-589fbcc97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c90ce12-f5c9-423c-8e01-a32bac086304", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589fbcc97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"5446bfc6729a574ee2e162c22ef6eeca75b41e0609d6de1a25b2556f4eef0d7f", Pod:"calico-apiserver-589fbcc97d-2zc84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali942a3184d33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.235 [INFO][5646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.235 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" iface="eth0" netns="" Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.235 [INFO][5646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.235 [INFO][5646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.254 [INFO][5654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.254 [INFO][5654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.254 [INFO][5654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.260 [WARNING][5654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.260 [INFO][5654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" HandleID="k8s-pod-network.eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--2zc84-eth0" Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.261 [INFO][5654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:02.263742 env[1532]: 2025-08-13 00:56:02.262 [INFO][5646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47" Aug 13 00:56:02.263742 env[1532]: time="2025-08-13T00:56:02.263689863Z" level=info msg="TearDown network for sandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\" successfully" Aug 13 00:56:02.274083 env[1532]: time="2025-08-13T00:56:02.274038002Z" level=info msg="RemovePodSandbox \"eeba2cd6471749a16ff1956865941f80ebd2ef5d7a6d5b2376b86ea4e3749d47\" returns successfully" Aug 13 00:56:02.274546 env[1532]: time="2025-08-13T00:56:02.274515413Z" level=info msg="StopPodSandbox for \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\"" Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.308 [WARNING][5669] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"30e30d82-15c7-47b1-9012-021e8bd25177", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e", Pod:"coredns-7c65d6cfc9-8nhmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2804437750b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.308 [INFO][5669] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.309 [INFO][5669] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" iface="eth0" netns="" Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.309 [INFO][5669] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.309 [INFO][5669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.334 [INFO][5676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.335 [INFO][5676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.335 [INFO][5676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.340 [WARNING][5676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.341 [INFO][5676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.342 [INFO][5676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:02.344844 env[1532]: 2025-08-13 00:56:02.343 [INFO][5669] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:56:02.345540 env[1532]: time="2025-08-13T00:56:02.344879936Z" level=info msg="TearDown network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\" successfully" Aug 13 00:56:02.345540 env[1532]: time="2025-08-13T00:56:02.344918537Z" level=info msg="StopPodSandbox for \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\" returns successfully" Aug 13 00:56:02.345540 env[1532]: time="2025-08-13T00:56:02.345418648Z" level=info msg="RemovePodSandbox for \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\"" Aug 13 00:56:02.345540 env[1532]: time="2025-08-13T00:56:02.345461049Z" level=info msg="Forcibly stopping sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\"" Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.379 [WARNING][5691] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"30e30d82-15c7-47b1-9012-021e8bd25177", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"e23d6ac4002326ff3b13289fca435232e963d408cececf9acbeafea55104aa7e", Pod:"coredns-7c65d6cfc9-8nhmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2804437750b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.379 [INFO][5691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.379 [INFO][5691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" iface="eth0" netns="" Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.379 [INFO][5691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.379 [INFO][5691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.400 [INFO][5698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.401 [INFO][5698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.401 [INFO][5698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.407 [WARNING][5698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.407 [INFO][5698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" HandleID="k8s-pod-network.46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Workload="ci--3510.3.8--a--1859c445b4-k8s-coredns--7c65d6cfc9--8nhmz-eth0" Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.408 [INFO][5698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:02.411269 env[1532]: 2025-08-13 00:56:02.410 [INFO][5691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a" Aug 13 00:56:02.412816 env[1532]: time="2025-08-13T00:56:02.411295568Z" level=info msg="TearDown network for sandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\" successfully" Aug 13 00:56:02.428664 env[1532]: time="2025-08-13T00:56:02.428611167Z" level=info msg="RemovePodSandbox \"46233dc2e79491cfe43a12efc9b5964d14cf259192d83636099649d812abb13a\" returns successfully" Aug 13 00:56:02.429181 env[1532]: time="2025-08-13T00:56:02.429151280Z" level=info msg="StopPodSandbox for \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\"" Aug 13 00:56:02.509149 systemd[1]: run-containerd-runc-k8s.io-994264523b6dc9d632b7e5ed1d5fc06d05c047e11318ed3b142dfb8736f5bc0e-runc.OWZpz9.mount: Deactivated successfully. Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.470 [WARNING][5712] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0", GenerateName:"calico-apiserver-589fbcc97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"27000197-aa78-41c4-95cb-9d77cedc6876", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589fbcc97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb", Pod:"calico-apiserver-589fbcc97d-sqb5w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali703ae353144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.471 [INFO][5712] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.471 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" iface="eth0" netns="" Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.471 [INFO][5712] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.471 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.520 [INFO][5729] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.520 [INFO][5729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.520 [INFO][5729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.534 [WARNING][5729] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.534 [INFO][5729] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.536 [INFO][5729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:02.542246 env[1532]: 2025-08-13 00:56:02.539 [INFO][5712] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:56:02.543306 env[1532]: time="2025-08-13T00:56:02.542210987Z" level=info msg="TearDown network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\" successfully" Aug 13 00:56:02.543401 env[1532]: time="2025-08-13T00:56:02.543315713Z" level=info msg="StopPodSandbox for \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\" returns successfully" Aug 13 00:56:02.543991 env[1532]: time="2025-08-13T00:56:02.543952628Z" level=info msg="RemovePodSandbox for \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\"" Aug 13 00:56:02.544175 env[1532]: time="2025-08-13T00:56:02.544128932Z" level=info msg="Forcibly stopping sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\"" Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.607 [WARNING][5754] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0", GenerateName:"calico-apiserver-589fbcc97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"27000197-aa78-41c4-95cb-9d77cedc6876", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589fbcc97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-1859c445b4", ContainerID:"2c0ec0b1ca4e5b2602c03b8ee403edce875747608e0fefc18326dcc9dd6b5feb", Pod:"calico-apiserver-589fbcc97d-sqb5w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali703ae353144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.608 [INFO][5754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.608 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" iface="eth0" netns="" Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.608 [INFO][5754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.608 [INFO][5754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.632 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.632 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.632 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.638 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.639 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" HandleID="k8s-pod-network.dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Workload="ci--3510.3.8--a--1859c445b4-k8s-calico--apiserver--589fbcc97d--sqb5w-eth0" Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.640 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:02.643024 env[1532]: 2025-08-13 00:56:02.641 [INFO][5754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211" Aug 13 00:56:02.643619 env[1532]: time="2025-08-13T00:56:02.643583826Z" level=info msg="TearDown network for sandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\" successfully" Aug 13 00:56:02.650959 env[1532]: time="2025-08-13T00:56:02.650919495Z" level=info msg="RemovePodSandbox \"dddc2f361c1035e2c62dbdbc2b62ebbb2da80f423e10982b9a575f2b78ba9211\" returns successfully" Aug 13 00:56:03.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.17:22-10.200.16.10:48818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:03.680770 systemd[1]: Started sshd@7-10.200.4.17:22-10.200.16.10:48818.service. Aug 13 00:56:03.698911 kernel: audit: type=1130 audit(1755046563.680:433): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.17:22-10.200.16.10:48818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:04.276000 audit[5771]: USER_ACCT pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.295419 sshd[5771]: Accepted publickey for core from 10.200.16.10 port 48818 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:04.295882 kernel: audit: type=1101 audit(1755046564.276:434): pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.295936 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:04.294000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.304153 systemd[1]: Started session-10.scope. Aug 13 00:56:04.305264 systemd-logind[1516]: New session 10 of user core. 
Aug 13 00:56:04.314934 kernel: audit: type=1103 audit(1755046564.294:435): pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.326882 kernel: audit: type=1006 audit(1755046564.294:436): pid=5771 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Aug 13 00:56:04.326966 kernel: audit: type=1300 audit(1755046564.294:436): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5cae05a0 a2=3 a3=0 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:04.294000 audit[5771]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5cae05a0 a2=3 a3=0 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:04.294000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:04.345889 kernel: audit: type=1327 audit(1755046564.294:436): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:04.345973 kernel: audit: type=1105 audit(1755046564.307:437): pid=5771 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.307000 audit[5771]: USER_START pid=5771 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.307000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.381727 kernel: audit: type=1103 audit(1755046564.307:438): pid=5774 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.888112 sshd[5771]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:04.890000 audit[5771]: USER_END pid=5771 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.893055 systemd[1]: sshd@7-10.200.4.17:22-10.200.16.10:48818.service: Deactivated successfully. Aug 13 00:56:04.893933 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:56:04.901221 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:56:04.902175 systemd-logind[1516]: Removed session 10. 
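Each PAM/audit event in this stretch appears twice: once as a named userspace record (USER_ACCT, CRED_ACQ, USER_START, ...) and once echoed by the kernel as a numeric "audit: type=NNNN" line carrying the same audit(timestamp:serial) stamp; the occasional "kauditd_printk_skb: N callbacks suppressed" line indicates some echoes were rate-limited. The pairs visible in this journal give the mapping directly: 1006=LOGIN, 1101=USER_ACCT, 1103=CRED_ACQ, 1104=CRED_DISP, 1105=USER_START, 1106=USER_END, 1130=SERVICE_START, 1300=SYSCALL, 1327=PROCTITLE. A small lookup like the sketch below (an illustrative helper of mine, not a tool present on this system) can annotate the numeric echoes while reading the dump.

```python
import re

# Numeric audit record types paired with the named records that appear
# alongside them in this journal (same audit(timestamp:serial) stamp).
AUDIT_TYPES = {
    1006: "LOGIN",          # emitted when a new logind session is assigned
    1101: "USER_ACCT",
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",  # per-connection sshd@... unit started
    1131: "SERVICE_STOP",   # assumed counterpart; only SERVICE_START is echoed above
    1300: "SYSCALL",
    1327: "PROCTITLE",
}

KERNEL_ECHO = re.compile(r"kernel: audit: type=(\d+) audit\((\d+\.\d+):(\d+)\)")

def name_kernel_echoes(line: str) -> str:
    """Append the record name after every 'kernel: audit: type=NNNN' echo in a line."""
    def repl(m):
        name = AUDIT_TYPES.get(int(m.group(1)), "UNKNOWN")
        return f"{m.group(0)} <{name}>"
    return KERNEL_ECHO.sub(repl, line)

if __name__ == "__main__":
    sample = "Aug 13 00:56:04.314934 kernel: audit: type=1103 audit(1755046564.294:435):"
    print(name_kernel_echoes(sample))  # ... audit(1755046564.294:435) <CRED_ACQ>:
```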
Aug 13 00:56:04.890000 audit[5771]: CRED_DISP pid=5771 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.923582 kernel: audit: type=1106 audit(1755046564.890:439): pid=5771 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.923708 kernel: audit: type=1104 audit(1755046564.890:440): pid=5771 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:04.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.17:22-10.200.16.10:48818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:09.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.17:22-10.200.16.10:48824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:09.985526 systemd[1]: Started sshd@8-10.200.4.17:22-10.200.16.10:48824.service. Aug 13 00:56:09.990693 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:56:09.990790 kernel: audit: type=1130 audit(1755046569.984:442): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.17:22-10.200.16.10:48824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:10.576851 sshd[5813]: Accepted publickey for core from 10.200.16.10 port 48824 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:10.575000 audit[5813]: USER_ACCT pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:10.597881 kernel: audit: type=1101 audit(1755046570.575:443): pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:10.599112 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:10.597000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:10.617277 kernel: audit: type=1103 audit(1755046570.597:444): pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:10.619035 systemd[1]: Started session-11.scope. Aug 13 00:56:10.624935 systemd-logind[1516]: New session 11 of user core. 
Aug 13 00:56:10.630475 kernel: audit: type=1006 audit(1755046570.597:445): pid=5813 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Aug 13 00:56:10.597000 audit[5813]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2804cc80 a2=3 a3=0 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:10.655334 kernel: audit: type=1300 audit(1755046570.597:445): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2804cc80 a2=3 a3=0 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:10.655430 kernel: audit: type=1327 audit(1755046570.597:445): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:10.597000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:10.629000 audit[5813]: USER_START pid=5813 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:10.677568 kernel: audit: type=1105 audit(1755046570.629:446): pid=5813 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:10.634000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:10.691673 kernel: audit: type=1103 audit(1755046570.634:447): pid=5816 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:11.066655 sshd[5813]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:11.067000 audit[5813]: USER_END pid=5813 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:11.069920 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:56:11.071347 systemd[1]: sshd@8-10.200.4.17:22-10.200.16.10:48824.service: Deactivated successfully. Aug 13 00:56:11.072225 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:56:11.073805 systemd-logind[1516]: Removed session 11. 
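The PROCTITLE records carry the process command line hex-encoded; the value repeated throughout this section, 737368643A20636F7265205B707269765D, decodes to "sshd: core [priv]", i.e. the privileged sshd process handling the "core" user's login. Decoding is a one-liner; the helper below is just an illustrative snippet.

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value; NUL bytes separate argv entries."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]
```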
Aug 13 00:56:11.067000 audit[5813]: CRED_DISP pid=5813 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:11.098970 kernel: audit: type=1106 audit(1755046571.067:448): pid=5813 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:11.099058 kernel: audit: type=1104 audit(1755046571.067:449): pid=5813 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:11.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.17:22-10.200.16.10:48824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:12.961139 systemd[1]: run-containerd-runc-k8s.io-0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13-runc.K49fqO.mount: Deactivated successfully. Aug 13 00:56:16.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.17:22-10.200.16.10:54848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:16.166014 systemd[1]: Started sshd@9-10.200.4.17:22-10.200.16.10:54848.service. Aug 13 00:56:16.171166 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:56:16.171236 kernel: audit: type=1130 audit(1755046576.164:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.17:22-10.200.16.10:54848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:16.755000 audit[5869]: USER_ACCT pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:16.774907 kernel: audit: type=1101 audit(1755046576.755:452): pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:16.774964 sshd[5869]: Accepted publickey for core from 10.200.16.10 port 54848 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:16.775224 sshd[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:16.772000 audit[5869]: CRED_ACQ pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:16.779979 systemd-logind[1516]: New session 12 of user core. Aug 13 00:56:16.781517 systemd[1]: Started session-12.scope. 
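Interleaved with the SSH activity, systemd keeps reporting transient run-containerd-runc-k8s.io-&lt;container-id&gt;-runc.XXXXXX.mount units as "Deactivated successfully". These are short-lived mounts that appear when runc operates on an existing container, and the same few container IDs (0a82cab3..., 994264..., 226020...) recur at a fairly regular cadence, which is consistent with periodic exec-style health probes; that interpretation is an inference from the pattern, not something the log states. A quick way to see the pattern is to tally the mount deactivations per container ID, as in the sketch below (the regex and script are my own illustration).

```python
import re
import sys
from collections import Counter

# Transient units look like:
#   run-containerd-runc-k8s.io-<64-hex-container-id>-runc.XXXXXX.mount: Deactivated successfully.
RUNC_MOUNT = re.compile(
    r"run-containerd-runc-k8s\.io-(?P<cid>[0-9a-f]{64})-runc\.\w+\.mount: Deactivated successfully"
)

def runc_mount_counts(journal_text: str) -> Counter:
    """Count deactivated transient runc mount units per container ID."""
    return Counter(m.group("cid") for m in RUNC_MOUNT.finditer(journal_text))

if __name__ == "__main__":
    for cid, n in runc_mount_counts(sys.stdin.read()).most_common():
        print(f"{n:3d}  {cid[:12]}...")
```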
Aug 13 00:56:16.801128 kernel: audit: type=1103 audit(1755046576.772:453): pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:16.801233 kernel: audit: type=1006 audit(1755046576.772:454): pid=5869 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Aug 13 00:56:16.801260 kernel: audit: type=1300 audit(1755046576.772:454): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9418c240 a2=3 a3=0 items=0 ppid=1 pid=5869 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:16.772000 audit[5869]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9418c240 a2=3 a3=0 items=0 ppid=1 pid=5869 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:16.772000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:16.822006 kernel: audit: type=1327 audit(1755046576.772:454): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:16.822110 kernel: audit: type=1105 audit(1755046576.784:455): pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:16.784000 audit[5869]: USER_START pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:16.787000 audit[5871]: CRED_ACQ pid=5871 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:16.853376 kernel: audit: type=1103 audit(1755046576.787:456): pid=5871 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:17.240488 sshd[5869]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:17.240000 audit[5869]: USER_END pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:17.243833 systemd[1]: sshd@9-10.200.4.17:22-10.200.16.10:54848.service: Deactivated successfully. Aug 13 00:56:17.244731 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:56:17.252162 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:56:17.253173 systemd-logind[1516]: Removed session 12. 
Aug 13 00:56:17.262876 kernel: audit: type=1106 audit(1755046577.240:457): pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:17.262964 kernel: audit: type=1104 audit(1755046577.240:458): pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:17.240000 audit[5869]: CRED_DISP pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:17.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.17:22-10.200.16.10:54848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:17.337477 systemd[1]: Started sshd@10-10.200.4.17:22-10.200.16.10:54858.service. Aug 13 00:56:17.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.17:22-10.200.16.10:54858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:17.924000 audit[5882]: USER_ACCT pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:17.926923 sshd[5882]: Accepted publickey for core from 10.200.16.10 port 54858 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:17.928682 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:17.926000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:17.926000 audit[5882]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf4a59820 a2=3 a3=0 items=0 ppid=1 pid=5882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:17.926000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:17.934271 systemd[1]: Started session-13.scope. Aug 13 00:56:17.934535 systemd-logind[1516]: New session 13 of user core. 
Aug 13 00:56:17.939000 audit[5882]: USER_START pid=5882 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:17.941000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:18.455146 sshd[5882]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:18.454000 audit[5882]: USER_END pid=5882 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:18.455000 audit[5882]: CRED_DISP pid=5882 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:18.458269 systemd[1]: sshd@10-10.200.4.17:22-10.200.16.10:54858.service: Deactivated successfully. Aug 13 00:56:18.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.17:22-10.200.16.10:54858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:18.459703 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:56:18.460415 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:56:18.461994 systemd-logind[1516]: Removed session 13. Aug 13 00:56:18.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.17:22-10.200.16.10:54874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:18.552570 systemd[1]: Started sshd@11-10.200.4.17:22-10.200.16.10:54874.service. 
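Each inbound connection gets its own unit named sshd@&lt;n&gt;-&lt;local-ip&gt;:&lt;port&gt;-&lt;remote-ip&gt;:&lt;port&gt;.service, which is how socket-activated sshd instances are named per accepted connection, and the SERVICE_START/SERVICE_STOP audit records carry the same string, so the unit name alone identifies the TCP 4-tuple and lets setup be paired with teardown. The parser below is an illustrative, IPv4-only sketch; the pattern and type names are my own assumptions.

```python
import re
from typing import NamedTuple, Optional

class SshConnection(NamedTuple):
    index: int
    local_addr: str
    local_port: int
    remote_addr: str
    remote_port: int

# e.g. sshd@11-10.200.4.17:22-10.200.16.10:54874.service
UNIT_RE = re.compile(
    r"sshd@(?P<idx>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
)

def parse_sshd_unit(text: str) -> Optional[SshConnection]:
    """Extract the connection 4-tuple from a per-connection sshd unit name, if present."""
    m = UNIT_RE.search(text)
    if not m:
        return None
    return SshConnection(int(m.group("idx")), m.group("laddr"), int(m.group("lport")),
                         m.group("raddr"), int(m.group("rport")))

print(parse_sshd_unit("systemd[1]: Started sshd@11-10.200.4.17:22-10.200.16.10:54874.service."))
```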
Aug 13 00:56:19.142000 audit[5893]: USER_ACCT pid=5893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:19.144272 sshd[5893]: Accepted publickey for core from 10.200.16.10 port 54874 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:19.143000 audit[5893]: CRED_ACQ pid=5893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:19.143000 audit[5893]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc822f090 a2=3 a3=0 items=0 ppid=1 pid=5893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:19.143000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:19.146000 sshd[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:19.150999 systemd[1]: Started session-14.scope. Aug 13 00:56:19.151484 systemd-logind[1516]: New session 14 of user core. Aug 13 00:56:19.159000 audit[5893]: USER_START pid=5893 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:19.160000 audit[5896]: CRED_ACQ pid=5896 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:19.627143 sshd[5893]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:19.626000 audit[5893]: USER_END pid=5893 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:19.626000 audit[5893]: CRED_DISP pid=5893 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:19.630683 systemd[1]: sshd@11-10.200.4.17:22-10.200.16.10:54874.service: Deactivated successfully. Aug 13 00:56:19.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.17:22-10.200.16.10:54874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:19.632945 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:56:19.633149 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:56:19.634595 systemd-logind[1516]: Removed session 14. Aug 13 00:56:24.725890 systemd[1]: Started sshd@12-10.200.4.17:22-10.200.16.10:57160.service. 
Aug 13 00:56:24.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.17:22-10.200.16.10:57160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:24.738482 kernel: kauditd_printk_skb: 23 callbacks suppressed Aug 13 00:56:24.738559 kernel: audit: type=1130 audit(1755046584.725:478): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.17:22-10.200.16.10:57160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:25.322000 audit[5909]: USER_ACCT pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.325282 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:25.341154 kernel: audit: type=1101 audit(1755046585.322:479): pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.341186 sshd[5909]: Accepted publickey for core from 10.200.16.10 port 57160 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:25.324000 audit[5909]: CRED_ACQ pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.353081 systemd[1]: Started session-15.scope. Aug 13 00:56:25.353912 systemd-logind[1516]: New session 15 of user core. 
Aug 13 00:56:25.358907 kernel: audit: type=1103 audit(1755046585.324:480): pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.324000 audit[5909]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff301bcf70 a2=3 a3=0 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:25.389032 kernel: audit: type=1006 audit(1755046585.324:481): pid=5909 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Aug 13 00:56:25.389125 kernel: audit: type=1300 audit(1755046585.324:481): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff301bcf70 a2=3 a3=0 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:25.389165 kernel: audit: type=1327 audit(1755046585.324:481): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:25.324000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:25.362000 audit[5909]: USER_START pid=5909 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.411093 kernel: audit: type=1105 audit(1755046585.362:482): pid=5909 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.411203 kernel: audit: type=1103 audit(1755046585.367:483): pid=5915 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.367000 audit[5915]: CRED_ACQ pid=5915 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.807412 sshd[5909]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:25.809000 audit[5909]: USER_END pid=5909 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.812097 systemd[1]: sshd@12-10.200.4.17:22-10.200.16.10:57160.service: Deactivated successfully. Aug 13 00:56:25.813152 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:56:25.820671 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:56:25.821701 systemd-logind[1516]: Removed session 15. 
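The logind lines give a clean open/close pair per session ("New session N of user core" / "Removed session N"), so session lifetimes can be read straight off the journal timestamps; in this window most sessions last well under a second, consistent with an automated client running one short command per connection (an inference from the pattern, not stated in the log). The pairing sketch below assumes the journal's "Aug 13 00:56:04.305264" timestamp format and hard-codes the year, since journal lines carry none.

```python
import re
import sys
from datetime import datetime

TS = r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)"
NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (?P<sid>\d+) of user")
REMOVED = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (?P<sid>\d+)\.")

def parse_ts(ts: str, year: int = 2025) -> datetime:
    # Journal lines carry no year; assume the year of this dump.
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(journal_text: str):
    """Yield (session id, seconds open) for sessions both opened and closed in the dump."""
    opened = {}
    for m in NEW.finditer(journal_text):
        opened[m.group("sid")] = parse_ts(m.group("ts"))
    for m in REMOVED.finditer(journal_text):
        sid = m.group("sid")
        if sid in opened:
            yield sid, (parse_ts(m.group("ts")) - opened[sid]).total_seconds()

if __name__ == "__main__":
    for sid, seconds in session_durations(sys.stdin.read()):
        print(f"session {sid}: {seconds:.3f}s")
```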
Aug 13 00:56:25.809000 audit[5909]: CRED_DISP pid=5909 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.841713 kernel: audit: type=1106 audit(1755046585.809:484): pid=5909 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.841777 kernel: audit: type=1104 audit(1755046585.809:485): pid=5909 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:25.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.17:22-10.200.16.10:57160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:26.242212 systemd[1]: run-containerd-runc-k8s.io-0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13-runc.9ji1IC.mount: Deactivated successfully. Aug 13 00:56:30.936181 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:56:30.936339 kernel: audit: type=1130 audit(1755046590.914:487): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.17:22-10.200.16.10:43590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:30.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.17:22-10.200.16.10:43590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:30.915245 systemd[1]: Started sshd@13-10.200.4.17:22-10.200.16.10:43590.service. Aug 13 00:56:31.507000 audit[5945]: USER_ACCT pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.527638 sshd[5945]: Accepted publickey for core from 10.200.16.10 port 43590 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:31.528033 kernel: audit: type=1101 audit(1755046591.507:488): pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.526000 audit[5945]: CRED_ACQ pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.528358 sshd[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:31.537161 systemd[1]: Started session-16.scope. Aug 13 00:56:31.537912 systemd-logind[1516]: New session 16 of user core. 
Aug 13 00:56:31.546034 kernel: audit: type=1103 audit(1755046591.526:489): pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.526000 audit[5945]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff8dec430 a2=3 a3=0 items=0 ppid=1 pid=5945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:31.556058 kernel: audit: type=1006 audit(1755046591.526:490): pid=5945 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 13 00:56:31.556107 kernel: audit: type=1300 audit(1755046591.526:490): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff8dec430 a2=3 a3=0 items=0 ppid=1 pid=5945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:31.526000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:31.570874 kernel: audit: type=1327 audit(1755046591.526:490): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:31.542000 audit[5945]: USER_START pid=5945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.576880 kernel: audit: type=1105 audit(1755046591.542:491): pid=5945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.545000 audit[5947]: CRED_ACQ pid=5947 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.606343 kernel: audit: type=1103 audit(1755046591.545:492): pid=5947 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.994292 sshd[5945]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:31.994000 audit[5945]: USER_END pid=5945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.998028 systemd[1]: sshd@13-10.200.4.17:22-10.200.16.10:43590.service: Deactivated successfully. Aug 13 00:56:31.999057 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:56:32.005505 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:56:32.006465 systemd-logind[1516]: Removed session 16. 
Aug 13 00:56:31.994000 audit[5945]: CRED_DISP pid=5945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:32.027441 kernel: audit: type=1106 audit(1755046591.994:493): pid=5945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:32.027524 kernel: audit: type=1104 audit(1755046591.994:494): pid=5945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:31.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.17:22-10.200.16.10:43590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:32.467157 systemd[1]: run-containerd-runc-k8s.io-994264523b6dc9d632b7e5ed1d5fc06d05c047e11318ed3b142dfb8736f5bc0e-runc.bXWcXq.mount: Deactivated successfully. Aug 13 00:56:37.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.17:22-10.200.16.10:43594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:37.095012 systemd[1]: Started sshd@14-10.200.4.17:22-10.200.16.10:43594.service. Aug 13 00:56:37.099674 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:56:37.099772 kernel: audit: type=1130 audit(1755046597.094:496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.17:22-10.200.16.10:43594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:37.696000 audit[6002]: USER_ACCT pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:37.698456 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:37.714247 kernel: audit: type=1101 audit(1755046597.696:497): pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:37.714291 sshd[6002]: Accepted publickey for core from 10.200.16.10 port 43594 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:37.697000 audit[6002]: CRED_ACQ pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:37.734950 kernel: audit: type=1103 audit(1755046597.697:498): pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:37.727873 systemd[1]: Started session-17.scope. Aug 13 00:56:37.728902 systemd-logind[1516]: New session 17 of user core. Aug 13 00:56:37.697000 audit[6002]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef37738d0 a2=3 a3=0 items=0 ppid=1 pid=6002 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:37.770921 kernel: audit: type=1006 audit(1755046597.697:499): pid=6002 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Aug 13 00:56:37.771045 kernel: audit: type=1300 audit(1755046597.697:499): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef37738d0 a2=3 a3=0 items=0 ppid=1 pid=6002 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:37.771078 kernel: audit: type=1327 audit(1755046597.697:499): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:37.697000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:37.737000 audit[6002]: USER_START pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:37.792177 kernel: audit: type=1105 audit(1755046597.737:500): pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:37.739000 audit[6005]: CRED_ACQ pid=6005 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:37.805935 kernel: audit: type=1103 audit(1755046597.739:501): pid=6005 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:38.235798 sshd[6002]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:38.236000 audit[6002]: USER_END pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:38.236000 audit[6002]: CRED_DISP pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:38.256934 systemd[1]: sshd@14-10.200.4.17:22-10.200.16.10:43594.service: Deactivated successfully. Aug 13 00:56:38.258781 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:56:38.259310 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:56:38.260415 systemd-logind[1516]: Removed session 17. Aug 13 00:56:38.272965 kernel: audit: type=1106 audit(1755046598.236:502): pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:38.273092 kernel: audit: type=1104 audit(1755046598.236:503): pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:38.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.17:22-10.200.16.10:43594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:42.922564 systemd[1]: run-containerd-runc-k8s.io-226020fbefa058b226391c7a763482a99250c56940a2a54c02771981f6b67683-runc.fj9vfa.mount: Deactivated successfully. Aug 13 00:56:42.940611 systemd[1]: run-containerd-runc-k8s.io-0a82cab3a36d5b5ae02db32cf67c5f2dbaf543f279a81dc9de4b95d2c904ff13-runc.Mj8Vaz.mount: Deactivated successfully. Aug 13 00:56:43.333624 systemd[1]: Started sshd@15-10.200.4.17:22-10.200.16.10:46524.service. Aug 13 00:56:43.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.17:22-10.200.16.10:46524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:43.337835 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:56:43.337975 kernel: audit: type=1130 audit(1755046603.333:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.17:22-10.200.16.10:46524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:43.931000 audit[6056]: USER_ACCT pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:43.933228 sshd[6056]: Accepted publickey for core from 10.200.16.10 port 46524 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:43.950000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:43.951794 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:43.961618 systemd[1]: Started session-18.scope. Aug 13 00:56:43.962622 systemd-logind[1516]: New session 18 of user core. Aug 13 00:56:43.969999 kernel: audit: type=1101 audit(1755046603.931:506): pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:43.970069 kernel: audit: type=1103 audit(1755046603.950:507): pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:43.970100 kernel: audit: type=1006 audit(1755046603.950:508): pid=6056 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Aug 13 00:56:43.950000 audit[6056]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe93fe9eb0 a2=3 a3=0 items=0 ppid=1 pid=6056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:43.994378 kernel: audit: type=1300 audit(1755046603.950:508): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe93fe9eb0 a2=3 a3=0 items=0 ppid=1 pid=6056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:43.950000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:44.000052 kernel: audit: type=1327 audit(1755046603.950:508): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:43.968000 audit[6056]: USER_START pid=6056 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:44.000884 kernel: audit: type=1105 audit(1755046603.968:509): pid=6056 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:43.973000 audit[6059]: CRED_ACQ pid=6059 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:44.017876 kernel: audit: type=1103 audit(1755046603.973:510): pid=6059 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:44.422607 sshd[6056]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:44.423000 audit[6056]: USER_END pid=6056 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:44.426578 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:56:44.428092 systemd[1]: sshd@15-10.200.4.17:22-10.200.16.10:46524.service: Deactivated successfully. Aug 13 00:56:44.429093 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:56:44.430662 systemd-logind[1516]: Removed session 18. Aug 13 00:56:44.423000 audit[6056]: CRED_DISP pid=6056 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:44.456982 kernel: audit: type=1106 audit(1755046604.423:511): pid=6056 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:44.457074 kernel: audit: type=1104 audit(1755046604.423:512): pid=6056 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:44.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.17:22-10.200.16.10:46524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:44.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.17:22-10.200.16.10:46532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:44.520930 systemd[1]: Started sshd@16-10.200.4.17:22-10.200.16.10:46532.service. 
Aug 13 00:56:45.113000 audit[6068]: USER_ACCT pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:45.114771 sshd[6068]: Accepted publickey for core from 10.200.16.10 port 46532 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:45.115000 audit[6068]: CRED_ACQ pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:45.115000 audit[6068]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedf60eb80 a2=3 a3=0 items=0 ppid=1 pid=6068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:45.115000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:45.116797 sshd[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:45.121413 systemd-logind[1516]: New session 19 of user core. Aug 13 00:56:45.122077 systemd[1]: Started session-19.scope. Aug 13 00:56:45.127000 audit[6068]: USER_START pid=6068 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:45.129000 audit[6071]: CRED_ACQ pid=6071 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:45.632745 sshd[6068]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:45.633000 audit[6068]: USER_END pid=6068 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:45.633000 audit[6068]: CRED_DISP pid=6068 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:45.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.17:22-10.200.16.10:46532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:45.636643 systemd[1]: sshd@16-10.200.4.17:22-10.200.16.10:46532.service: Deactivated successfully. Aug 13 00:56:45.638789 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:56:45.639632 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:56:45.641482 systemd-logind[1516]: Removed session 19. Aug 13 00:56:45.729574 systemd[1]: Started sshd@17-10.200.4.17:22-10.200.16.10:46540.service. 
Aug 13 00:56:45.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.17:22-10.200.16.10:46540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:46.317000 audit[6078]: USER_ACCT pid=6078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:46.319050 sshd[6078]: Accepted publickey for core from 10.200.16.10 port 46540 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:46.319000 audit[6078]: CRED_ACQ pid=6078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:46.319000 audit[6078]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeceeb8390 a2=3 a3=0 items=0 ppid=1 pid=6078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:46.319000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:46.320787 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:46.325746 systemd[1]: Started session-20.scope. Aug 13 00:56:46.326091 systemd-logind[1516]: New session 20 of user core. Aug 13 00:56:46.333000 audit[6078]: USER_START pid=6078 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:46.335000 audit[6081]: CRED_ACQ pid=6081 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:48.449273 sshd[6078]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:48.457918 kernel: kauditd_printk_skb: 20 callbacks suppressed Aug 13 00:56:48.458038 kernel: audit: type=1106 audit(1755046608.450:529): pid=6078 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:48.450000 audit[6078]: USER_END pid=6078 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:48.458488 systemd[1]: sshd@17-10.200.4.17:22-10.200.16.10:46540.service: Deactivated successfully. Aug 13 00:56:48.459467 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:56:48.465004 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:56:48.466134 systemd-logind[1516]: Removed session 20. 
Aug 13 00:56:48.510448 kernel: audit: type=1104 audit(1755046608.450:530): pid=6078 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:48.450000 audit[6078]: CRED_DISP pid=6078 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:48.524895 kernel: audit: type=1131 audit(1755046608.457:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.17:22-10.200.16.10:46540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.17:22-10.200.16.10:46540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.461000 audit[6091]: NETFILTER_CFG table=filter:137 family=2 entries=8 op=nft_register_rule pid=6091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:48.537878 kernel: audit: type=1325 audit(1755046608.461:532): table=filter:137 family=2 entries=8 op=nft_register_rule pid=6091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:48.545430 systemd[1]: Started sshd@18-10.200.4.17:22-10.200.16.10:46556.service. Aug 13 00:56:48.461000 audit[6091]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd131e4940 a2=0 a3=7ffd131e492c items=0 ppid=2726 pid=6091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.569950 kernel: audit: type=1300 audit(1755046608.461:532): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd131e4940 a2=0 a3=7ffd131e492c items=0 ppid=2726 pid=6091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:48.580876 kernel: audit: type=1327 audit(1755046608.461:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:48.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.17:22-10.200.16.10:46556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:48.549000 audit[6091]: NETFILTER_CFG table=nat:138 family=2 entries=26 op=nft_register_rule pid=6091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:48.604877 kernel: audit: type=1130 audit(1755046608.541:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.17:22-10.200.16.10:46556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:48.604967 kernel: audit: type=1325 audit(1755046608.549:534): table=nat:138 family=2 entries=26 op=nft_register_rule pid=6091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:48.549000 audit[6091]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffd131e4940 a2=0 a3=7ffd131e492c items=0 ppid=2726 pid=6091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.622791 kernel: audit: type=1300 audit(1755046608.549:534): arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffd131e4940 a2=0 a3=7ffd131e492c items=0 ppid=2726 pid=6091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.623010 kernel: audit: type=1327 audit(1755046608.549:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:48.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:48.585000 audit[6097]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=6097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:48.585000 audit[6097]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff626f9440 a2=0 a3=7fff626f942c items=0 ppid=2726 pid=6097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:48.632000 audit[6097]: NETFILTER_CFG table=nat:140 family=2 entries=26 op=nft_register_rule pid=6097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:48.632000 audit[6097]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fff626f9440 a2=0 a3=0 items=0 ppid=2726 pid=6097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:48.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:49.141000 audit[6094]: USER_ACCT pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:49.142506 sshd[6094]: Accepted publickey for core from 10.200.16.10 port 46556 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:49.142000 audit[6094]: CRED_ACQ pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:49.142000 audit[6094]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe00d36390 a2=3 a3=0 items=0 ppid=1 
pid=6094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:49.142000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:49.144056 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:49.149248 systemd[1]: Started session-21.scope. Aug 13 00:56:49.150005 systemd-logind[1516]: New session 21 of user core. Aug 13 00:56:49.155000 audit[6094]: USER_START pid=6094 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:49.156000 audit[6099]: CRED_ACQ pid=6099 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:49.729242 sshd[6094]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:49.730000 audit[6094]: USER_END pid=6094 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:49.730000 audit[6094]: CRED_DISP pid=6094 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:49.732816 systemd[1]: sshd@18-10.200.4.17:22-10.200.16.10:46556.service: Deactivated successfully. Aug 13 00:56:49.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.17:22-10.200.16.10:46556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:49.734498 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:56:49.734515 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:56:49.736087 systemd-logind[1516]: Removed session 21. Aug 13 00:56:49.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.17:22-10.200.16.10:46572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:49.826419 systemd[1]: Started sshd@19-10.200.4.17:22-10.200.16.10:46572.service. 
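Note on the PROCTITLE records above (for example proctitle=737368643A20636F7265205B707269765D for the sshd sessions, and the longer value emitted by the iptables-restore runs): auditd hex-encodes the process title, and when the underlying value is an argv the arguments are separated by NUL bytes. A minimal Python sketch for decoding these fields while reading this log; the helper name decode_proctitle is illustrative, not anything emitted by auditd:

def decode_proctitle(hex_value: str) -> str:
    # PROCTITLE fields are hex bytes; NUL bytes (if any) separate argv entries.
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

# No NUL bytes here: sshd has rewritten its process title.
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]

# NUL-separated argv from the iptables-restore records above.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters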
Aug 13 00:56:50.419000 audit[6107]: USER_ACCT pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:50.420806 sshd[6107]: Accepted publickey for core from 10.200.16.10 port 46572 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:50.420000 audit[6107]: CRED_ACQ pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:50.421000 audit[6107]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7cf31850 a2=3 a3=0 items=0 ppid=1 pid=6107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:50.421000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:50.422303 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:50.427138 systemd-logind[1516]: New session 22 of user core. Aug 13 00:56:50.427894 systemd[1]: Started session-22.scope. Aug 13 00:56:50.434000 audit[6107]: USER_START pid=6107 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:50.436000 audit[6110]: CRED_ACQ pid=6110 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:50.903049 sshd[6107]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:50.903000 audit[6107]: USER_END pid=6107 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:50.903000 audit[6107]: CRED_DISP pid=6107 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:50.906575 systemd[1]: sshd@19-10.200.4.17:22-10.200.16.10:46572.service: Deactivated successfully. Aug 13 00:56:50.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.17:22-10.200.16.10:46572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:50.908835 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:56:50.909554 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:56:50.911909 systemd-logind[1516]: Removed session 22. 
Aug 13 00:56:55.462000 audit[6121]: NETFILTER_CFG table=filter:141 family=2 entries=20 op=nft_register_rule pid=6121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:55.467207 kernel: kauditd_printk_skb: 27 callbacks suppressed Aug 13 00:56:55.467306 kernel: audit: type=1325 audit(1755046615.462:554): table=filter:141 family=2 entries=20 op=nft_register_rule pid=6121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:55.462000 audit[6121]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd65152be0 a2=0 a3=7ffd65152bcc items=0 ppid=2726 pid=6121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:55.498703 kernel: audit: type=1300 audit(1755046615.462:554): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd65152be0 a2=0 a3=7ffd65152bcc items=0 ppid=2726 pid=6121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:55.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:55.508884 kernel: audit: type=1327 audit(1755046615.462:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:55.511000 audit[6121]: NETFILTER_CFG table=nat:142 family=2 entries=110 op=nft_register_chain pid=6121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:55.524880 kernel: audit: type=1325 audit(1755046615.511:555): table=nat:142 family=2 entries=110 op=nft_register_chain pid=6121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:55.511000 audit[6121]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffd65152be0 a2=0 a3=7ffd65152bcc items=0 ppid=2726 pid=6121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:55.545886 kernel: audit: type=1300 audit(1755046615.511:555): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffd65152be0 a2=0 a3=7ffd65152bcc items=0 ppid=2726 pid=6121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:55.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:55.556866 kernel: audit: type=1327 audit(1755046615.511:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:56.000801 systemd[1]: Started sshd@20-10.200.4.17:22-10.200.16.10:44814.service. Aug 13 00:56:56.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.17:22-10.200.16.10:44814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:56.018877 kernel: audit: type=1130 audit(1755046616.000:556): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.17:22-10.200.16.10:44814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:56.592705 sshd[6123]: Accepted publickey for core from 10.200.16.10 port 44814 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:56.594440 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:56.591000 audit[6123]: USER_ACCT pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:56.593000 audit[6123]: CRED_ACQ pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:56.618329 systemd[1]: Started session-23.scope. Aug 13 00:56:56.619659 systemd-logind[1516]: New session 23 of user core. Aug 13 00:56:56.628589 kernel: audit: type=1101 audit(1755046616.591:557): pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:56.629796 kernel: audit: type=1103 audit(1755046616.593:558): pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:56.593000 audit[6123]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf1f5d950 a2=3 a3=0 items=0 ppid=1 pid=6123 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:56.593000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:56.629000 audit[6123]: USER_START pid=6123 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:56.634000 audit[6126]: CRED_ACQ pid=6126 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:56.645946 kernel: audit: type=1006 audit(1755046616.593:559): pid=6123 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Aug 13 00:56:57.122254 sshd[6123]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:57.122000 audit[6123]: USER_END pid=6123 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' 
Aug 13 00:56:57.122000 audit[6123]: CRED_DISP pid=6123 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:56:57.125434 systemd[1]: sshd@20-10.200.4.17:22-10.200.16.10:44814.service: Deactivated successfully. Aug 13 00:56:57.126924 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:56:57.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.17:22-10.200.16.10:44814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:57.127426 systemd-logind[1516]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:56:57.128431 systemd-logind[1516]: Removed session 23. Aug 13 00:57:02.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.17:22-10.200.16.10:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:02.219763 systemd[1]: Started sshd@21-10.200.4.17:22-10.200.16.10:35374.service. Aug 13 00:57:02.224481 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:57:02.224557 kernel: audit: type=1130 audit(1755046622.219:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.17:22-10.200.16.10:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:02.470189 systemd[1]: run-containerd-runc-k8s.io-994264523b6dc9d632b7e5ed1d5fc06d05c047e11318ed3b142dfb8736f5bc0e-runc.NfIqWy.mount: Deactivated successfully. Aug 13 00:57:02.816000 audit[6139]: USER_ACCT pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:02.817388 sshd[6139]: Accepted publickey for core from 10.200.16.10 port 35374 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:57:02.834936 kernel: audit: type=1101 audit(1755046622.816:566): pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:02.835157 sshd[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:02.833000 audit[6139]: CRED_ACQ pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:02.845849 systemd[1]: Started session-24.scope. Aug 13 00:57:02.846833 systemd-logind[1516]: New session 24 of user core. 
Aug 13 00:57:02.854894 kernel: audit: type=1103 audit(1755046622.833:567): pid=6139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:02.833000 audit[6139]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef7f94270 a2=3 a3=0 items=0 ppid=1 pid=6139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:02.864872 kernel: audit: type=1006 audit(1755046622.833:568): pid=6139 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Aug 13 00:57:02.864922 kernel: audit: type=1300 audit(1755046622.833:568): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef7f94270 a2=3 a3=0 items=0 ppid=1 pid=6139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:02.883442 kernel: audit: type=1327 audit(1755046622.833:568): proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:02.833000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:02.854000 audit[6139]: USER_START pid=6139 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:02.903296 kernel: audit: type=1105 audit(1755046622.854:569): pid=6139 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:02.854000 audit[6164]: CRED_ACQ pid=6164 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:02.917675 kernel: audit: type=1103 audit(1755046622.854:570): pid=6164 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:03.308000 audit[6139]: USER_END pid=6139 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:03.308000 audit[6139]: CRED_DISP pid=6139 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:03.316107 systemd[1]: sshd@21-10.200.4.17:22-10.200.16.10:35374.service: Deactivated successfully. Aug 13 00:57:03.308415 sshd[6139]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:03.316988 systemd[1]: session-24.scope: Deactivated successfully. 
Aug 13 00:57:03.318243 systemd-logind[1516]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:57:03.319059 systemd-logind[1516]: Removed session 24. Aug 13 00:57:03.342418 kernel: audit: type=1106 audit(1755046623.308:571): pid=6139 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:03.342522 kernel: audit: type=1104 audit(1755046623.308:572): pid=6139 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:03.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.17:22-10.200.16.10:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:06.754267 systemd[1]: run-containerd-runc-k8s.io-226020fbefa058b226391c7a763482a99250c56940a2a54c02771981f6b67683-runc.w4Q8W0.mount: Deactivated successfully. Aug 13 00:57:08.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.17:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:08.408156 systemd[1]: Started sshd@22-10.200.4.17:22-10.200.16.10:35376.service. Aug 13 00:57:08.413386 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:57:08.413468 kernel: audit: type=1130 audit(1755046628.407:574): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.17:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:09.010486 sshd[6195]: Accepted publickey for core from 10.200.16.10 port 35376 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:57:09.011103 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:09.008000 audit[6195]: USER_ACCT pid=6195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.009000 audit[6195]: CRED_ACQ pid=6195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.032747 systemd[1]: Started session-25.scope. Aug 13 00:57:09.033843 systemd-logind[1516]: New session 25 of user core. 
Aug 13 00:57:09.046275 kernel: audit: type=1101 audit(1755046629.008:575): pid=6195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.046375 kernel: audit: type=1103 audit(1755046629.009:576): pid=6195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.055989 kernel: audit: type=1006 audit(1755046629.009:577): pid=6195 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Aug 13 00:57:09.009000 audit[6195]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9932d890 a2=3 a3=0 items=0 ppid=1 pid=6195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:09.072610 kernel: audit: type=1300 audit(1755046629.009:577): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9932d890 a2=3 a3=0 items=0 ppid=1 pid=6195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:09.072756 kernel: audit: type=1327 audit(1755046629.009:577): proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:09.009000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:09.034000 audit[6195]: USER_START pid=6195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.039000 audit[6197]: CRED_ACQ pid=6197 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.111118 kernel: audit: type=1105 audit(1755046629.034:578): pid=6195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.111239 kernel: audit: type=1103 audit(1755046629.039:579): pid=6197 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.539428 sshd[6195]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:09.539000 audit[6195]: USER_END pid=6195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.543569 systemd-logind[1516]: Session 25 logged out. Waiting for processes to exit. 
Aug 13 00:57:09.545166 systemd[1]: sshd@22-10.200.4.17:22-10.200.16.10:35376.service: Deactivated successfully. Aug 13 00:57:09.546010 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:57:09.547406 systemd-logind[1516]: Removed session 25. Aug 13 00:57:09.559880 kernel: audit: type=1106 audit(1755046629.539:580): pid=6195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.539000 audit[6195]: CRED_DISP pid=6195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.581943 kernel: audit: type=1104 audit(1755046629.539:581): pid=6195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:09.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.17:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:12.931391 systemd[1]: run-containerd-runc-k8s.io-226020fbefa058b226391c7a763482a99250c56940a2a54c02771981f6b67683-runc.r2Sa1u.mount: Deactivated successfully. Aug 13 00:57:14.651225 systemd[1]: Started sshd@23-10.200.4.17:22-10.200.16.10:45048.service. Aug 13 00:57:14.674169 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:57:14.674271 kernel: audit: type=1130 audit(1755046634.650:583): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.17:22-10.200.16.10:45048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:14.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.17:22-10.200.16.10:45048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:57:15.241000 audit[6250]: USER_ACCT pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.260595 sshd[6250]: Accepted publickey for core from 10.200.16.10 port 45048 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:57:15.260998 kernel: audit: type=1101 audit(1755046635.241:584): pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.259000 audit[6250]: CRED_ACQ pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.261183 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:15.267333 systemd[1]: Started session-26.scope. Aug 13 00:57:15.268283 systemd-logind[1516]: New session 26 of user core. Aug 13 00:57:15.278878 kernel: audit: type=1103 audit(1755046635.259:585): pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.259000 audit[6250]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4dfe8170 a2=3 a3=0 items=0 ppid=1 pid=6250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:15.305272 kernel: audit: type=1006 audit(1755046635.259:586): pid=6250 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Aug 13 00:57:15.305347 kernel: audit: type=1300 audit(1755046635.259:586): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4dfe8170 a2=3 a3=0 items=0 ppid=1 pid=6250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:15.305373 kernel: audit: type=1327 audit(1755046635.259:586): proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:15.259000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:15.273000 audit[6250]: USER_START pid=6250 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.329207 kernel: audit: type=1105 audit(1755046635.273:587): pid=6250 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.329269 kernel: audit: type=1103 audit(1755046635.278:588): pid=6253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.278000 audit[6253]: CRED_ACQ pid=6253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.740242 sshd[6250]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:15.740000 audit[6250]: USER_END pid=6250 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.744161 systemd-logind[1516]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:57:15.745594 systemd[1]: sshd@23-10.200.4.17:22-10.200.16.10:45048.service: Deactivated successfully. Aug 13 00:57:15.746432 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:57:15.748036 systemd-logind[1516]: Removed session 26. Aug 13 00:57:15.741000 audit[6250]: CRED_DISP pid=6250 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.771741 kernel: audit: type=1106 audit(1755046635.740:589): pid=6250 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.771814 kernel: audit: type=1104 audit(1755046635.741:590): pid=6250 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:57:15.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.17:22-10.200.16.10:45048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:20.840675 systemd[1]: Started sshd@24-10.200.4.17:22-10.200.16.10:47536.service. Aug 13 00:57:20.861933 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:57:20.862037 kernel: audit: type=1130 audit(1755046640.839:592): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.17:22-10.200.16.10:47536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:20.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.17:22-10.200.16.10:47536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Aug 13 00:57:21.430000 audit[6262]: USER_ACCT pid=6262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.448965 kernel: audit: type=1101 audit(1755046641.430:593): pid=6262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.433837 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:57:21.431000 audit[6262]: CRED_ACQ pid=6262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.449424 sshd[6262]: Accepted publickey for core from 10.200.16.10 port 47536 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:57:21.455044 systemd[1]: Started session-27.scope.
Aug 13 00:57:21.456167 systemd-logind[1516]: New session 27 of user core.
Aug 13 00:57:21.469654 kernel: audit: type=1103 audit(1755046641.431:594): pid=6262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.469746 kernel: audit: type=1006 audit(1755046641.431:595): pid=6262 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Aug 13 00:57:21.431000 audit[6262]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde4a28d50 a2=3 a3=0 items=0 ppid=1 pid=6262 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:57:21.476874 kernel: audit: type=1300 audit(1755046641.431:595): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde4a28d50 a2=3 a3=0 items=0 ppid=1 pid=6262 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:57:21.431000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:57:21.497877 kernel: audit: type=1327 audit(1755046641.431:595): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:57:21.457000 audit[6262]: USER_START pid=6262 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.514880 kernel: audit: type=1105 audit(1755046641.457:596): pid=6262 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.514948 kernel: audit: type=1103 audit(1755046641.471:597): pid=6265 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.471000 audit[6265]: CRED_ACQ pid=6265 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.903077 sshd[6262]: pam_unix(sshd:session): session closed for user core
Aug 13 00:57:21.902000 audit[6262]: USER_END pid=6262 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.908874 systemd[1]: sshd@24-10.200.4.17:22-10.200.16.10:47536.service: Deactivated successfully.
Aug 13 00:57:21.910549 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:57:21.911211 systemd-logind[1516]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:57:21.912285 systemd-logind[1516]: Removed session 27.
Aug 13 00:57:21.921873 kernel: audit: type=1106 audit(1755046641.902:598): pid=6262 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.902000 audit[6262]: CRED_DISP pid=6262 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:21.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.17:22-10.200.16.10:47536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:57:21.936881 kernel: audit: type=1104 audit(1755046641.902:599): pid=6262 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:27.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.17:22-10.200.16.10:47544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:57:27.003082 systemd[1]: Started sshd@25-10.200.4.17:22-10.200.16.10:47544.service.
Aug 13 00:57:27.008405 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:57:27.008484 kernel: audit: type=1130 audit(1755046647.001:601): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.17:22-10.200.16.10:47544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:57:27.598000 audit[6296]: USER_ACCT pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:27.615840 sshd[6296]: Accepted publickey for core from 10.200.16.10 port 47544 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:57:27.616201 kernel: audit: type=1101 audit(1755046647.598:602): pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:27.615691 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:57:27.599000 audit[6296]: CRED_ACQ pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:27.634892 systemd[1]: Started session-28.scope.
Aug 13 00:57:27.636071 systemd-logind[1516]: New session 28 of user core.
Aug 13 00:57:27.645339 kernel: audit: type=1103 audit(1755046647.599:603): pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:27.645423 kernel: audit: type=1006 audit(1755046647.599:604): pid=6296 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Aug 13 00:57:27.599000 audit[6296]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea6f964a0 a2=3 a3=0 items=0 ppid=1 pid=6296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:57:27.661441 kernel: audit: type=1300 audit(1755046647.599:604): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea6f964a0 a2=3 a3=0 items=0 ppid=1 pid=6296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:57:27.599000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:57:27.666888 kernel: audit: type=1327 audit(1755046647.599:604): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:57:27.640000 audit[6296]: USER_START pid=6296 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:27.642000 audit[6299]: CRED_ACQ pid=6299 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:27.697249 kernel: audit: type=1105 audit(1755046647.640:605): pid=6296 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:27.697326 kernel: audit: type=1103 audit(1755046647.642:606): pid=6299 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:28.077113 sshd[6296]: pam_unix(sshd:session): session closed for user core
Aug 13 00:57:28.077000 audit[6296]: USER_END pid=6296 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:28.080288 systemd[1]: sshd@25-10.200.4.17:22-10.200.16.10:47544.service: Deactivated successfully.
Aug 13 00:57:28.081163 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:57:28.082786 systemd-logind[1516]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:57:28.083742 systemd-logind[1516]: Removed session 28.
Aug 13 00:57:28.077000 audit[6296]: CRED_DISP pid=6296 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:28.109489 kernel: audit: type=1106 audit(1755046648.077:607): pid=6296 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:28.109582 kernel: audit: type=1104 audit(1755046648.077:608): pid=6296 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:57:28.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.17:22-10.200.16.10:47544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
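
The PROCTITLE / type=1327 records above carry the process title as a hex string (737368643A20636F7265205B707269765D). A minimal Python sketch for recovering the readable title, assuming the usual audit encoding in which argv entries are separated by NUL bytes; the helper name is illustrative and not part of any existing tool:

# Not part of the captured log: decode the hex-encoded proctitle field
# found in the audit PROCTITLE / type=1327 records above.
def decode_proctitle(hex_value: str) -> str:
    """Audit records proctitle as hex; NUL bytes separate argv entries."""
    raw = bytes.fromhex(hex_value)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Value copied verbatim from the type=1327 records in this log.
    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]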
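
The kernel-printed audit lines can also be tied back together by their audit(<epoch>:<serial>) stamp; for example, the type=1006, type=1300 and type=1327 records stamped :595 all belong to the same session-27 login event. A rough sketch of that grouping, assuming journal-style lines like the ones above (the regex and function names are illustrative, not from any existing tool):

# Not part of the captured log: group kernel-printed audit records by their
# audit event id, i.e. the "audit(<epoch>.<ms>:<serial>)" stamp.
import re
from collections import defaultdict

AUDIT_RE = re.compile(r"audit: type=(\d+) audit\((\d+\.\d+):(\d+)\):")

def group_by_event(lines):
    events = defaultdict(list)  # (epoch, serial) -> list of record types
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            rec_type, epoch, serial = m.groups()
            events[(epoch, serial)].append(rec_type)
    return events

# Example with two lines copied from the log above (truncated for brevity):
sample = [
    "Aug 13 00:57:21.476874 kernel: audit: type=1300 audit(1755046641.431:595): arch=c000003e ...",
    "Aug 13 00:57:21.497877 kernel: audit: type=1327 audit(1755046641.431:595): proctitle=7373...",
]
print(dict(group_by_event(sample)))
# -> {('1755046641.431', '595'): ['1300', '1327']}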