Sep 13 00:56:15.147047 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025 Sep 13 00:56:15.147087 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:56:15.147105 kernel: BIOS-provided physical RAM map: Sep 13 00:56:15.147117 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 13 00:56:15.147130 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 13 00:56:15.147142 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 13 00:56:15.147161 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 13 00:56:15.147175 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 13 00:56:15.147188 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd27afff] usable Sep 13 00:56:15.147200 kernel: BIOS-e820: [mem 0x00000000bd27b000-0x00000000bd284fff] ACPI data Sep 13 00:56:15.147214 kernel: BIOS-e820: [mem 0x00000000bd285000-0x00000000bf8ecfff] usable Sep 13 00:56:15.147227 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Sep 13 00:56:15.147241 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 13 00:56:15.147254 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 13 00:56:15.147276 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 13 00:56:15.147291 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 13 00:56:15.147305 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 13 00:56:15.147319 kernel: NX (Execute Disable) protection: active Sep 13 00:56:15.147332 kernel: efi: EFI v2.70 by EDK II Sep 13 00:56:15.148480 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd27b018 Sep 13 00:56:15.148498 kernel: random: crng init done Sep 13 00:56:15.148513 kernel: SMBIOS 2.4 present. 
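For illustration, the BIOS-e820 entries above can be parsed mechanically; the short Python sketch below (sample strings copied from the map above, not an official tool) sums the ranges marked usable. Summed over the full map, the usable ranges come to roughly 7.5 GiB, which lines up with the later "Memory: 7515424K/7860544K available" line.

# Illustrative sketch: sum the "usable" ranges from BIOS-e820 lines like the ones above.
import re

E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

sample = """\
BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
BIOS-e820: [mem 0x0000000000100000-0x00000000bd27afff] usable
BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
"""

usable = 0
for start, end, kind in E820_RE.findall(sample):
    if kind == "usable":
        usable += int(end, 16) - int(start, 16) + 1   # ranges are inclusive

print(f"usable in this sample: {usable / 2**20:.1f} MiB")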
Sep 13 00:56:15.148535 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025 Sep 13 00:56:15.148551 kernel: Hypervisor detected: KVM Sep 13 00:56:15.148566 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:56:15.148582 kernel: kvm-clock: cpu 0, msr 18f19f001, primary cpu clock Sep 13 00:56:15.148597 kernel: kvm-clock: using sched offset of 13651598469 cycles Sep 13 00:56:15.148632 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:56:15.148646 kernel: tsc: Detected 2299.998 MHz processor Sep 13 00:56:15.148661 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:56:15.148675 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:56:15.148690 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 13 00:56:15.148710 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:56:15.148725 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 13 00:56:15.148741 kernel: Using GB pages for direct mapping Sep 13 00:56:15.148756 kernel: Secure boot disabled Sep 13 00:56:15.148772 kernel: ACPI: Early table checksum verification disabled Sep 13 00:56:15.148787 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 13 00:56:15.148801 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 13 00:56:15.148817 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 13 00:56:15.148845 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 13 00:56:15.148861 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 13 00:56:15.148879 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Sep 13 00:56:15.148905 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 13 00:56:15.148922 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 13 00:56:15.148938 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 13 00:56:15.148958 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 13 00:56:15.148975 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 13 00:56:15.148992 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 13 00:56:15.149008 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 13 00:56:15.149025 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 13 00:56:15.149041 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 13 00:56:15.149058 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 13 00:56:15.149075 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 13 00:56:15.149092 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 13 00:56:15.149112 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 13 00:56:15.149127 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 13 00:56:15.149143 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 13 00:56:15.149161 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 13 00:56:15.149177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 13 00:56:15.149194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 13 
00:56:15.149212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 13 00:56:15.149229 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Sep 13 00:56:15.149247 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Sep 13 00:56:15.149267 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Sep 13 00:56:15.149285 kernel: Zone ranges: Sep 13 00:56:15.149302 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:56:15.149318 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 13 00:56:15.149335 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 13 00:56:15.149351 kernel: Movable zone start for each node Sep 13 00:56:15.149367 kernel: Early memory node ranges Sep 13 00:56:15.149384 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 13 00:56:15.149401 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 13 00:56:15.149422 kernel: node 0: [mem 0x0000000000100000-0x00000000bd27afff] Sep 13 00:56:15.149439 kernel: node 0: [mem 0x00000000bd285000-0x00000000bf8ecfff] Sep 13 00:56:15.149455 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 13 00:56:15.149472 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 13 00:56:15.149489 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 13 00:56:15.149506 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:56:15.149523 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 13 00:56:15.149540 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 13 00:56:15.149556 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Sep 13 00:56:15.149576 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 13 00:56:15.149591 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 13 00:56:15.158468 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 13 00:56:15.158498 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:56:15.158516 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:56:15.158534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:56:15.158551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:56:15.158567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:56:15.158584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:56:15.158625 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:56:15.158642 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 13 00:56:15.158658 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 13 00:56:15.158675 kernel: Booting paravirtualized kernel on KVM Sep 13 00:56:15.158692 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:56:15.158709 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Sep 13 00:56:15.158726 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Sep 13 00:56:15.158743 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Sep 13 00:56:15.158759 kernel: pcpu-alloc: [0] 0 1 Sep 13 00:56:15.158780 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:56:15.158797 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:56:15.158814 kernel: Built 1 zonelists, mobility 
grouping on. Total pages: 1932270 Sep 13 00:56:15.158829 kernel: Policy zone: Normal Sep 13 00:56:15.158849 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:56:15.158867 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:56:15.158883 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 13 00:56:15.158907 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:56:15.158924 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:56:15.158945 kernel: Memory: 7515424K/7860544K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 344860K reserved, 0K cma-reserved) Sep 13 00:56:15.158962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:56:15.158978 kernel: Kernel/User page tables isolation: enabled Sep 13 00:56:15.158994 kernel: ftrace: allocating 34614 entries in 136 pages Sep 13 00:56:15.159010 kernel: ftrace: allocated 136 pages with 2 groups Sep 13 00:56:15.159026 kernel: rcu: Hierarchical RCU implementation. Sep 13 00:56:15.159043 kernel: rcu: RCU event tracing is enabled. Sep 13 00:56:15.159060 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:56:15.159082 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:56:15.159114 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:56:15.159132 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:56:15.159153 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:56:15.159170 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 13 00:56:15.159185 kernel: Console: colour dummy device 80x25 Sep 13 00:56:15.159202 kernel: printk: console [ttyS0] enabled Sep 13 00:56:15.159220 kernel: ACPI: Core revision 20210730 Sep 13 00:56:15.159237 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:56:15.159255 kernel: x2apic enabled Sep 13 00:56:15.159276 kernel: Switched APIC routing to physical x2apic. Sep 13 00:56:15.159294 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 13 00:56:15.159312 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 13 00:56:15.159330 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Sep 13 00:56:15.159347 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 13 00:56:15.159363 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 13 00:56:15.159381 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:56:15.159402 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 13 00:56:15.159420 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 13 00:56:15.159438 kernel: Spectre V2 : Mitigation: IBRS Sep 13 00:56:15.159455 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:56:15.159473 kernel: RETBleed: Mitigation: IBRS Sep 13 00:56:15.159491 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:56:15.159509 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Sep 13 00:56:15.159527 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 13 00:56:15.159544 kernel: MDS: Mitigation: Clear CPU buffers Sep 13 00:56:15.159566 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:56:15.159584 kernel: active return thunk: its_return_thunk Sep 13 00:56:15.159601 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 13 00:56:15.159639 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:56:15.159655 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:56:15.159673 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:56:15.159691 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:56:15.159708 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 13 00:56:15.159726 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:56:15.159749 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:56:15.159765 kernel: LSM: Security Framework initializing Sep 13 00:56:15.159781 kernel: SELinux: Initializing. Sep 13 00:56:15.159797 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:56:15.159813 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:56:15.159831 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 13 00:56:15.159848 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 13 00:56:15.159865 kernel: signal: max sigframe size: 1776 Sep 13 00:56:15.159883 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:56:15.159912 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 13 00:56:15.159931 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:56:15.159949 kernel: x86: Booting SMP configuration: Sep 13 00:56:15.159967 kernel: .... node #0, CPUs: #1 Sep 13 00:56:15.159985 kernel: kvm-clock: cpu 1, msr 18f19f041, secondary cpu clock Sep 13 00:56:15.160004 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 13 00:56:15.160023 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
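The Spectre/RETBleed/MDS/MMIO lines above are the kernel's one-time mitigation report at boot. The same status is exported at runtime under /sys/devices/system/cpu/vulnerabilities (one file per issue); a minimal Python sketch to dump it, shown purely as an illustration:

# Illustrative: dump the kernel's CPU vulnerability/mitigation status from sysfs,
# the same information the mitigation lines above report at boot time.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:30s} {entry.read_text().strip()}")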
Sep 13 00:56:15.160041 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:56:15.160063 kernel: smpboot: Max logical packages: 1 Sep 13 00:56:15.160081 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 13 00:56:15.160099 kernel: devtmpfs: initialized Sep 13 00:56:15.160116 kernel: x86/mm: Memory block size: 128MB Sep 13 00:56:15.160134 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 13 00:56:15.160152 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:56:15.160170 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:56:15.160188 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:56:15.160206 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:56:15.160227 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:56:15.160245 kernel: audit: type=2000 audit(1757724973.742:1): state=initialized audit_enabled=0 res=1 Sep 13 00:56:15.160264 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:56:15.160281 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:56:15.160300 kernel: cpuidle: using governor menu Sep 13 00:56:15.160317 kernel: ACPI: bus type PCI registered Sep 13 00:56:15.160336 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:56:15.160354 kernel: dca service started, version 1.12.1 Sep 13 00:56:15.160371 kernel: PCI: Using configuration type 1 for base access Sep 13 00:56:15.160392 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 13 00:56:15.160410 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:56:15.160428 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:56:15.160446 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:56:15.160464 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:56:15.160482 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:56:15.160500 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 00:56:15.160518 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 00:56:15.160536 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 00:56:15.160558 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 13 00:56:15.160576 kernel: ACPI: Interpreter enabled Sep 13 00:56:15.160594 kernel: ACPI: PM: (supports S0 S3 S5) Sep 13 00:56:15.160630 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:56:15.160648 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:56:15.160666 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 13 00:56:15.160684 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:56:15.160955 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:56:15.161153 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Sep 13 00:56:15.161177 kernel: PCI host bridge to bus 0000:00 Sep 13 00:56:15.161343 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:56:15.161498 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:56:15.161682 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:56:15.161837 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 13 00:56:15.161997 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:56:15.162188 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 13 00:56:15.162373 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Sep 13 00:56:15.162555 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 13 00:56:15.170669 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 13 00:56:15.170902 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Sep 13 00:56:15.171081 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Sep 13 00:56:15.171268 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Sep 13 00:56:15.171445 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 13 00:56:15.177101 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Sep 13 00:56:15.177366 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Sep 13 00:56:15.177576 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:56:15.177927 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 13 00:56:15.178126 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Sep 13 00:56:15.178163 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:56:15.178182 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:56:15.178201 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:56:15.178219 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:56:15.178237 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 13 00:56:15.178255 kernel: iommu: Default domain type: Translated Sep 13 00:56:15.178271 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:56:15.178289 kernel: vgaarb: loaded Sep 13 00:56:15.178305 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:56:15.178327 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:56:15.178344 kernel: PTP clock support registered Sep 13 00:56:15.178361 kernel: Registered efivars operations Sep 13 00:56:15.178379 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:56:15.178396 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:56:15.178412 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 13 00:56:15.178430 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 13 00:56:15.178447 kernel: e820: reserve RAM buffer [mem 0xbd27b000-0xbfffffff] Sep 13 00:56:15.178464 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 13 00:56:15.178485 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 13 00:56:15.178502 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:56:15.178519 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:56:15.178537 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:56:15.178554 kernel: pnp: PnP ACPI init Sep 13 00:56:15.178571 kernel: pnp: PnP ACPI: found 7 devices Sep 13 00:56:15.178588 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:56:15.178693 kernel: NET: Registered PF_INET protocol family Sep 13 00:56:15.178719 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 00:56:15.178737 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 13 00:56:15.178754 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:56:15.178772 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:56:15.178788 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Sep 13 00:56:15.178806 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 13 00:56:15.178824 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 13 00:56:15.178842 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 13 00:56:15.178860 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:56:15.178882 kernel: NET: Registered PF_XDP protocol family Sep 13 00:56:15.179071 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:56:15.179231 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:56:15.179385 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:56:15.179536 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 13 00:56:15.186843 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 13 00:56:15.186892 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:56:15.186920 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 00:56:15.186940 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 13 00:56:15.186967 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 00:56:15.186986 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 13 00:56:15.187004 kernel: clocksource: Switched to clocksource tsc Sep 13 00:56:15.187024 kernel: Initialise system trusted keyrings Sep 13 00:56:15.187042 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 13 00:56:15.187061 kernel: Key type asymmetric registered Sep 13 00:56:15.187078 kernel: Asymmetric key parser 'x509' registered Sep 13 00:56:15.187099 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 249) Sep 13 00:56:15.187118 kernel: io scheduler mq-deadline registered Sep 13 00:56:15.187136 kernel: io scheduler kyber registered Sep 13 00:56:15.187154 kernel: io scheduler bfq registered Sep 13 00:56:15.187172 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:56:15.187190 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 13 00:56:15.187374 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 13 00:56:15.187398 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 13 00:56:15.187574 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 13 00:56:15.187601 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 13 00:56:15.188888 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 13 00:56:15.188933 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:56:15.188953 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:56:15.188980 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 13 00:56:15.188999 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 13 00:56:15.189018 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 13 00:56:15.189200 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 13 00:56:15.189234 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:56:15.189254 kernel: i8042: Warning: Keylock active Sep 13 00:56:15.189272 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:56:15.189290 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:56:15.189465 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 13 00:56:15.191786 kernel: rtc_cmos 00:00: registered as rtc0 Sep 13 00:56:15.191989 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:56:14 UTC (1757724974) Sep 13 00:56:15.192152 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 13 00:56:15.192181 kernel: intel_pstate: CPU model not supported Sep 13 00:56:15.192202 kernel: pstore: Registered efi as persistent store backend Sep 13 00:56:15.192222 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:56:15.192238 kernel: Segment Routing with IPv6 Sep 13 00:56:15.192254 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:56:15.192271 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:56:15.192288 kernel: Key type dns_resolver registered Sep 13 00:56:15.192306 kernel: IPI shorthand broadcast: enabled Sep 13 00:56:15.192323 kernel: sched_clock: Marking stable (741745475, 152799698)->(929016175, -34471002) Sep 13 00:56:15.192344 kernel: registered taskstats version 1 Sep 13 00:56:15.192361 kernel: Loading compiled-in X.509 certificates Sep 13 00:56:15.192377 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:56:15.192392 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 00:56:15.192408 kernel: Key type .fscrypt registered Sep 13 00:56:15.192423 kernel: Key type fscrypt-provisioning registered Sep 13 00:56:15.192438 kernel: pstore: Using crash dump compression: deflate Sep 13 00:56:15.192454 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:56:15.192469 kernel: ima: No architecture policies found Sep 13 00:56:15.192491 kernel: clk: Disabling unused clocks Sep 13 00:56:15.192507 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 
00:56:15.192524 kernel: Write protecting the kernel read-only data: 28672k Sep 13 00:56:15.192541 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 00:56:15.192557 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 00:56:15.192574 kernel: Run /init as init process Sep 13 00:56:15.192590 kernel: with arguments: Sep 13 00:56:15.192623 kernel: /init Sep 13 00:56:15.192641 kernel: with environment: Sep 13 00:56:15.192663 kernel: HOME=/ Sep 13 00:56:15.192678 kernel: TERM=linux Sep 13 00:56:15.192693 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:56:15.192713 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:56:15.192732 systemd[1]: Detected virtualization kvm. Sep 13 00:56:15.192748 systemd[1]: Detected architecture x86-64. Sep 13 00:56:15.192766 systemd[1]: Running in initrd. Sep 13 00:56:15.192788 systemd[1]: No hostname configured, using default hostname. Sep 13 00:56:15.192904 systemd[1]: Hostname set to . Sep 13 00:56:15.192926 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:56:15.192944 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:56:15.192962 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:56:15.192980 systemd[1]: Reached target cryptsetup.target. Sep 13 00:56:15.192998 systemd[1]: Reached target paths.target. Sep 13 00:56:15.193017 systemd[1]: Reached target slices.target. Sep 13 00:56:15.193041 systemd[1]: Reached target swap.target. Sep 13 00:56:15.193060 systemd[1]: Reached target timers.target. Sep 13 00:56:15.193080 systemd[1]: Listening on iscsid.socket. Sep 13 00:56:15.193099 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:56:15.193120 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:56:15.193138 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:56:15.193156 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:56:15.193173 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:56:15.193196 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:56:15.193214 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:56:15.193254 systemd[1]: Reached target sockets.target. Sep 13 00:56:15.193278 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:56:15.193297 systemd[1]: Finished network-cleanup.service. Sep 13 00:56:15.193316 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:56:15.193340 systemd[1]: Starting systemd-journald.service... Sep 13 00:56:15.193360 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:56:15.193378 systemd[1]: Starting systemd-resolved.service... Sep 13 00:56:15.193397 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:56:15.193417 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:56:15.193438 kernel: audit: type=1130 audit(1757724975.161:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.193458 systemd[1]: Finished systemd-fsck-usr.service. 
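The first systemd[1] line above includes the compile-time feature string ("+PAM +AUDIT ... +SYSVINIT"). As a small illustration (not a systemd tool), that string can be split into enabled and disabled build options; the flags below are copied from the line above:

# Illustrative: split the systemd feature string logged above into
# enabled (+) and disabled (-) build options.
flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
         "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
         "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
         "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")

enabled = [f[1:] for f in flags.split() if f.startswith("+")]
disabled = [f[1:] for f in flags.split() if f.startswith("-")]
print("enabled: ", " ".join(enabled))
print("disabled:", " ".join(disabled))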
Sep 13 00:56:15.193477 kernel: audit: type=1130 audit(1757724975.170:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.193497 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:56:15.193520 kernel: audit: type=1130 audit(1757724975.178:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.193539 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:56:15.193557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:56:15.193581 systemd-journald[189]: Journal started Sep 13 00:56:15.193757 systemd-journald[189]: Runtime Journal (/run/log/journal/79e37f81270ac927595ebe628b239327) is 8.0M, max 148.8M, 140.8M free. Sep 13 00:56:15.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.180066 systemd-modules-load[190]: Inserted module 'overlay' Sep 13 00:56:15.201152 systemd[1]: Started systemd-journald.service. Sep 13 00:56:15.201229 kernel: audit: type=1130 audit(1757724975.195:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.213064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:56:15.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.226684 kernel: audit: type=1130 audit(1757724975.211:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.242807 systemd-resolved[191]: Positive Trust Anchors: Sep 13 00:56:15.255779 kernel: audit: type=1130 audit(1757724975.245:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.242914 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:56:15.243302 systemd-resolved[191]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:56:15.243516 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:56:15.248288 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:56:15.260199 systemd-resolved[191]: Defaulting to hostname 'linux'. Sep 13 00:56:15.281747 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:56:15.281788 dracut-cmdline[205]: dracut-dracut-053 Sep 13 00:56:15.281788 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:56:15.296771 kernel: Bridge firewalling registered Sep 13 00:56:15.292833 systemd-modules-load[190]: Inserted module 'br_netfilter' Sep 13 00:56:15.305102 systemd[1]: Started systemd-resolved.service. Sep 13 00:56:15.327248 kernel: audit: type=1130 audit(1757724975.307:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.308913 systemd[1]: Reached target nss-lookup.target. Sep 13 00:56:15.330765 kernel: SCSI subsystem initialized Sep 13 00:56:15.351105 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:56:15.351185 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:56:15.353921 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:56:15.358947 systemd-modules-load[190]: Inserted module 'dm_multipath' Sep 13 00:56:15.360060 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:56:15.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.377627 kernel: audit: type=1130 audit(1757724975.371:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.373957 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:56:15.394308 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:56:15.403778 kernel: Loading iSCSI transport class v2.0-870. 
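dracut-cmdline echoes the effective kernel command line above; the duplicated rootflags=rw and mount.usrflags=ro come from parameters the initrd prepends ahead of the real BOOT_IMAGE command line. For illustration only, a Python sketch that splits such a command line into bare flags and key=value parameters, using the string from the dracut-cmdline output:

# Illustrative: split the kernel command line echoed by dracut above into
# bare flags and key=value parameters (duplicates keep the last value seen).
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce "
           "verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec")

params, bare_flags = {}, []
for token in cmdline.split():
    key, sep, value = token.partition("=")
    if sep:
        params[key] = value
    else:
        bare_flags.append(key)

print("root =", params["root"])                      # LABEL=ROOT
print("verity hash =", params["verity.usrhash"][:16], "...")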
Sep 13 00:56:15.403823 kernel: audit: type=1130 audit(1757724975.393:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.421657 kernel: iscsi: registered transport (tcp) Sep 13 00:56:15.450787 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:56:15.450872 kernel: QLogic iSCSI HBA Driver Sep 13 00:56:15.496792 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:56:15.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.499121 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:56:15.559727 kernel: raid6: avx2x4 gen() 18241 MB/s Sep 13 00:56:15.576659 kernel: raid6: avx2x4 xor() 7587 MB/s Sep 13 00:56:15.593655 kernel: raid6: avx2x2 gen() 18049 MB/s Sep 13 00:56:15.611664 kernel: raid6: avx2x2 xor() 18139 MB/s Sep 13 00:56:15.628667 kernel: raid6: avx2x1 gen() 14022 MB/s Sep 13 00:56:15.649660 kernel: raid6: avx2x1 xor() 15952 MB/s Sep 13 00:56:15.670664 kernel: raid6: sse2x4 gen() 10848 MB/s Sep 13 00:56:15.691650 kernel: raid6: sse2x4 xor() 6601 MB/s Sep 13 00:56:15.712648 kernel: raid6: sse2x2 gen() 11922 MB/s Sep 13 00:56:15.733655 kernel: raid6: sse2x2 xor() 7178 MB/s Sep 13 00:56:15.754664 kernel: raid6: sse2x1 gen() 10417 MB/s Sep 13 00:56:15.780699 kernel: raid6: sse2x1 xor() 5138 MB/s Sep 13 00:56:15.780766 kernel: raid6: using algorithm avx2x4 gen() 18241 MB/s Sep 13 00:56:15.780789 kernel: raid6: .... xor() 7587 MB/s, rmw enabled Sep 13 00:56:15.785803 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:56:15.811653 kernel: xor: automatically using best checksumming function avx Sep 13 00:56:15.929657 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:56:15.941350 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:56:15.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.948000 audit: BPF prog-id=7 op=LOAD Sep 13 00:56:15.948000 audit: BPF prog-id=8 op=LOAD Sep 13 00:56:15.951222 systemd[1]: Starting systemd-udevd.service... Sep 13 00:56:15.967915 systemd-udevd[388]: Using default interface naming scheme 'v252'. Sep 13 00:56:15.975278 systemd[1]: Started systemd-udevd.service. Sep 13 00:56:15.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:15.991082 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:56:16.006843 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation Sep 13 00:56:16.048769 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:56:16.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:16.050045 systemd[1]: Starting systemd-udev-trigger.service... 
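The raid6 lines above are the kernel benchmarking its gen()/xor() implementations at boot and settling on avx2x4 for generation with avx2x2 recovery. For illustration, a sketch that parses benchmark lines of that shape and reports the winner, using the figures printed above:

# Illustrative: pick the fastest RAID6 gen() implementation from benchmark
# lines of the form "raid6: <algo> gen() <N> MB/s", as printed above.
import re

results = """\
raid6: avx2x4 gen() 18241 MB/s
raid6: avx2x2 gen() 18049 MB/s
raid6: avx2x1 gen() 14022 MB/s
raid6: sse2x4 gen() 10848 MB/s
raid6: sse2x2 gen() 11922 MB/s
raid6: sse2x1 gen() 10417 MB/s
"""

gen = {m.group(1): int(m.group(2))
       for m in re.finditer(r"raid6: (\S+) gen\(\) (\d+) MB/s", results)}

best = max(gen, key=gen.get)
print(f"fastest gen(): {best} at {gen[best]} MB/s")   # avx2x4 at 18241 MB/s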
Sep 13 00:56:16.120311 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:56:16.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:16.203636 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:56:16.217639 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 13 00:56:16.253640 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:56:16.300081 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:56:16.317042 kernel: AES CTR mode by8 optimization enabled Sep 13 00:56:16.344629 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 13 00:56:16.408015 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 13 00:56:16.408272 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 13 00:56:16.408490 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 13 00:56:16.408776 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:56:16.408993 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:56:16.409026 kernel: GPT:17805311 != 25165823 Sep 13 00:56:16.409049 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:56:16.409071 kernel: GPT:17805311 != 25165823 Sep 13 00:56:16.409097 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:56:16.409120 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:56:16.409144 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 13 00:56:16.473135 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (433) Sep 13 00:56:16.472887 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:56:16.488078 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:56:16.511826 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:56:16.512082 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:56:16.562843 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:56:16.564195 systemd[1]: Starting disk-uuid.service... Sep 13 00:56:16.587133 disk-uuid[507]: Primary Header is updated. Sep 13 00:56:16.587133 disk-uuid[507]: Secondary Entries is updated. Sep 13 00:56:16.587133 disk-uuid[507]: Secondary Header is updated. Sep 13 00:56:16.617746 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:56:16.625636 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:56:16.651632 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:56:17.643695 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:56:17.644209 disk-uuid[508]: The operation has completed successfully. Sep 13 00:56:17.715668 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:56:17.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:17.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:17.715812 systemd[1]: Finished disk-uuid.service. Sep 13 00:56:17.734170 systemd[1]: Starting verity-setup.service... 
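The "GPT:Primary header thinks Alt. header is not at the end of the disk" / "GPT:17805311 != 25165823" warnings above are what the kernel prints when the backup GPT header is not at the device's final LBA, which is typical when a cloud disk is provisioned larger than the image written to it; the numbers are LBAs. A quick arithmetic check on the figures from the sd 0:0:1:0 lines, for illustration:

# Illustrative arithmetic on the figures logged above: 25165824 512-byte
# logical blocks, and a backup GPT header recorded at LBA 17805311 instead
# of the last LBA (25165823), i.e. the disk grew after the image was written.
blocks, block_size = 25_165_824, 512

size_bytes = blocks * block_size
print(f"disk size: {size_bytes / 1e9:.1f} GB = {size_bytes / 2**30:.1f} GiB")  # 12.9 GB / 12.0 GiB

recorded_last_lba, actual_last_lba = 17_805_311, blocks - 1
missing = (actual_last_lba - recorded_last_lba) * block_size
print(f"GPT was written for a disk {missing / 2**30:.1f} GiB smaller than the current one")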
Sep 13 00:56:17.763637 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:56:17.848573 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:56:17.851136 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:56:17.863265 systemd[1]: Finished verity-setup.service. Sep 13 00:56:17.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:17.957640 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:56:17.958517 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:56:17.958956 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:56:18.002838 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:56:18.002884 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:56:18.002904 kernel: BTRFS info (device sda6): has skinny extents Sep 13 00:56:17.959907 systemd[1]: Starting ignition-setup.service... Sep 13 00:56:18.015711 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:56:18.025242 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:56:18.047804 systemd[1]: Finished ignition-setup.service. Sep 13 00:56:18.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.057242 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:56:18.142713 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:56:18.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.142000 audit: BPF prog-id=9 op=LOAD Sep 13 00:56:18.144812 systemd[1]: Starting systemd-networkd.service... Sep 13 00:56:18.184456 systemd-networkd[682]: lo: Link UP Sep 13 00:56:18.184471 systemd-networkd[682]: lo: Gained carrier Sep 13 00:56:18.185741 systemd-networkd[682]: Enumeration completed Sep 13 00:56:18.185896 systemd[1]: Started systemd-networkd.service. Sep 13 00:56:18.186288 systemd-networkd[682]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:56:18.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.188683 systemd-networkd[682]: eth0: Link UP Sep 13 00:56:18.188691 systemd-networkd[682]: eth0: Gained carrier Sep 13 00:56:18.198095 systemd-networkd[682]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4.c.flatcar-212911.internal' to 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:56:18.198123 systemd-networkd[682]: eth0: DHCPv4 address 10.128.0.69/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 13 00:56:18.221123 systemd[1]: Reached target network.target. Sep 13 00:56:18.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.255036 systemd[1]: Starting iscsiuio.service... 
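systemd-networkd logs above that it shortened the overlong DHCP hostname to its first label. A rough reconstruction of that rule, stated here only as an assumption for illustration (keep the first DNS label of an overlong name, then hard-truncate if it is still longer than HOST_NAME_MAX), reproduces the shortening seen in the log:

# Illustrative reconstruction (an assumption, not systemd source) of the
# hostname shortening logged above.
HOST_NAME_MAX = 64

def shorten_overlong(fqdn: str) -> str:
    if len(fqdn) <= HOST_NAME_MAX:
        return fqdn
    label = fqdn.split(".", 1)[0]        # drop the domain part first
    return label[:HOST_NAME_MAX]         # then hard-truncate if still too long

received = "ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4.c.flatcar-212911.internal"
print(shorten_overlong(received))
# -> ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 (as in the log)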
Sep 13 00:56:18.318316 iscsid[692]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:56:18.318316 iscsid[692]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 00:56:18.318316 iscsid[692]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:56:18.318316 iscsid[692]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:56:18.318316 iscsid[692]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:56:18.318316 iscsid[692]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:56:18.318316 iscsid[692]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:56:18.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.283952 systemd[1]: Started iscsiuio.service. Sep 13 00:56:18.365005 ignition[595]: Ignition 2.14.0 Sep 13 00:56:18.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.305373 systemd[1]: Starting iscsid.service... Sep 13 00:56:18.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.365022 ignition[595]: Stage: fetch-offline Sep 13 00:56:18.318063 systemd[1]: Started iscsid.service. Sep 13 00:56:18.365105 ignition[595]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:56:18.332298 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:56:18.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.365147 ignition[595]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 13 00:56:18.353295 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:56:18.382307 ignition[595]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:56:18.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.379761 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:56:18.382533 ignition[595]: parsed url from cmdline: "" Sep 13 00:56:18.406011 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:56:18.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:56:18.382539 ignition[595]: no config URL provided Sep 13 00:56:18.437912 systemd[1]: Reached target remote-fs.target. Sep 13 00:56:18.382547 ignition[595]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:56:18.452127 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:56:18.382560 ignition[595]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:56:18.475206 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:56:18.382570 ignition[595]: failed to fetch config: resource requires networking Sep 13 00:56:18.490188 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:56:18.382761 ignition[595]: Ignition finished successfully Sep 13 00:56:18.505141 systemd[1]: Starting ignition-fetch.service... Sep 13 00:56:18.517149 ignition[706]: Ignition 2.14.0 Sep 13 00:56:18.540352 unknown[706]: fetched base config from "system" Sep 13 00:56:18.517158 ignition[706]: Stage: fetch Sep 13 00:56:18.540364 unknown[706]: fetched base config from "system" Sep 13 00:56:18.517310 ignition[706]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:56:18.540372 unknown[706]: fetched user config from "gcp" Sep 13 00:56:18.517343 ignition[706]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 13 00:56:18.542746 systemd[1]: Finished ignition-fetch.service. Sep 13 00:56:18.527986 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:56:18.545082 systemd[1]: Starting ignition-kargs.service... Sep 13 00:56:18.528197 ignition[706]: parsed url from cmdline: "" Sep 13 00:56:18.571184 systemd[1]: Finished ignition-kargs.service. Sep 13 00:56:18.528203 ignition[706]: no config URL provided Sep 13 00:56:18.582366 systemd[1]: Starting ignition-disks.service... Sep 13 00:56:18.528210 ignition[706]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:56:18.604634 systemd[1]: Finished ignition-disks.service. Sep 13 00:56:18.528222 ignition[706]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:56:18.615238 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:56:18.528259 ignition[706]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 13 00:56:18.630919 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:56:18.533380 ignition[706]: GET result: OK Sep 13 00:56:18.637991 systemd[1]: Reached target local-fs.target. Sep 13 00:56:18.533524 ignition[706]: parsing config with SHA512: a0213247fa86c70f2ac092610b308cc2065522e5282bee06f62cd23c2ed1a193bb511e2cda6c433383676dad7d7b8155f19eacbbfd60f70531a463b1601b48ab Sep 13 00:56:18.659909 systemd[1]: Reached target sysinit.target. Sep 13 00:56:18.541105 ignition[706]: fetch: fetch complete Sep 13 00:56:18.683900 systemd[1]: Reached target basic.target. Sep 13 00:56:18.541112 ignition[706]: fetch: fetch passed Sep 13 00:56:18.707158 systemd[1]: Starting systemd-fsck-root.service... 
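The Ignition fetch stage above retrieves user data from the GCE metadata service (the logged GET against 169.254.169.254). A minimal sketch of an equivalent request, for illustration, assuming the usual GCE requirement that metadata requests carry a Metadata-Flavor: Google header; it only works from inside a GCE VM:

# Illustrative: fetch GCE instance user-data the same way the Ignition
# fetch stage above does; GCE's metadata server requires this header.
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=5) as resp:   # only reachable on a GCE VM
    print(resp.read().decode())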
Sep 13 00:56:18.541166 ignition[706]: Ignition finished successfully Sep 13 00:56:18.558539 ignition[712]: Ignition 2.14.0 Sep 13 00:56:18.558550 ignition[712]: Stage: kargs Sep 13 00:56:18.558739 ignition[712]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:56:18.558772 ignition[712]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 13 00:56:18.568295 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:56:18.570046 ignition[712]: kargs: kargs passed Sep 13 00:56:18.570101 ignition[712]: Ignition finished successfully Sep 13 00:56:18.594194 ignition[718]: Ignition 2.14.0 Sep 13 00:56:18.594203 ignition[718]: Stage: disks Sep 13 00:56:18.594337 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:56:18.594368 ignition[718]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 13 00:56:18.602132 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:56:18.603595 ignition[718]: disks: disks passed Sep 13 00:56:18.603680 ignition[718]: Ignition finished successfully Sep 13 00:56:18.749960 systemd-fsck[726]: ROOT: clean, 629/1628000 files, 124065/1617920 blocks Sep 13 00:56:18.962589 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:56:18.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:18.977890 systemd[1]: Mounting sysroot.mount... Sep 13 00:56:19.008643 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:56:19.010010 systemd[1]: Mounted sysroot.mount. Sep 13 00:56:19.010366 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:56:19.038201 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:56:19.053250 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:56:19.053334 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:56:19.053383 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:56:19.074181 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:56:19.103764 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:56:19.132896 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (732) Sep 13 00:56:19.112059 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:56:19.160798 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:56:19.160842 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:56:19.160867 kernel: BTRFS info (device sda6): has skinny extents Sep 13 00:56:19.161138 initrd-setup-root[737]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:56:19.176848 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:56:19.176888 initrd-setup-root[745]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:56:19.193752 initrd-setup-root[753]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:56:19.186478 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
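systemd-fsck reports "ROOT: clean, 629/1628000 files, 124065/1617920 blocks" above. For a sense of scale, those ratios can be turned into usage figures; the 4 KiB block size below is an assumption (the ext4 default), since the log does not state it:

# Illustrative: turn the fsck summary above into usage percentages.
# The 4 KiB block size is an assumption (ext4 default); the log does not state it.
files_used, files_total = 629, 1_628_000
blocks_used, blocks_total = 124_065, 1_617_920
block_size = 4096

print(f"inodes used: {100 * files_used / files_total:.2f}%")
print(f"blocks used: {100 * blocks_used / blocks_total:.2f}% "
      f"({blocks_used * block_size / 2**20:.0f} MiB of "
      f"{blocks_total * block_size / 2**30:.1f} GiB)")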
Sep 13 00:56:19.219869 initrd-setup-root[777]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:56:19.242516 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:56:19.282844 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:56:19.282886 kernel: audit: type=1130 audit(1757724979.241:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:19.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:19.249667 systemd[1]: Starting ignition-mount.service... Sep 13 00:56:19.290927 systemd[1]: Starting sysroot-boot.service... Sep 13 00:56:19.305171 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 13 00:56:19.305282 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 13 00:56:19.331766 ignition[798]: INFO : Ignition 2.14.0 Sep 13 00:56:19.331766 ignition[798]: INFO : Stage: mount Sep 13 00:56:19.331766 ignition[798]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:56:19.331766 ignition[798]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 13 00:56:19.427822 kernel: audit: type=1130 audit(1757724979.350:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:19.427879 kernel: audit: type=1130 audit(1757724979.381:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:19.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:19.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:19.338795 systemd[1]: Finished sysroot-boot.service. Sep 13 00:56:19.441796 ignition[798]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:56:19.441796 ignition[798]: INFO : mount: mount passed Sep 13 00:56:19.441796 ignition[798]: INFO : Ignition finished successfully Sep 13 00:56:19.501792 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (807) Sep 13 00:56:19.501832 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:56:19.501849 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:56:19.501864 kernel: BTRFS info (device sda6): has skinny extents Sep 13 00:56:19.501879 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:56:19.352418 systemd[1]: Finished ignition-mount.service. Sep 13 00:56:19.384518 systemd[1]: Starting ignition-files.service... Sep 13 00:56:19.439465 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:56:19.527083 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 13 00:56:19.555890 ignition[826]: INFO : Ignition 2.14.0 Sep 13 00:56:19.555890 ignition[826]: INFO : Stage: files Sep 13 00:56:19.569743 ignition[826]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:56:19.569743 ignition[826]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 13 00:56:19.569743 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:56:19.569743 ignition[826]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:56:19.621775 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:56:19.621775 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:56:19.621775 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:56:19.621775 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:56:19.621775 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:56:19.621775 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:56:19.621775 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:56:19.621775 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:56:19.621775 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:56:19.582142 unknown[826]: wrote ssh authorized keys file for user: core Sep 13 00:56:19.998861 systemd-networkd[682]: eth0: Gained IPv6LL Sep 13 00:56:20.963721 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:56:22.048849 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:56:22.065754 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" Sep 13 00:56:22.065754 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:56:22.065754 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2393414538" Sep 13 00:56:22.065754 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2393414538": device or resource busy Sep 13 00:56:22.065754 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2393414538", trying btrfs: device or resource busy Sep 13 00:56:22.065754 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2393414538" Sep 13 00:56:22.065754 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2393414538" Sep 13 00:56:22.187784 ignition[826]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem2393414538" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem2393414538" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem196135809" Sep 13 00:56:22.187784 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem196135809": device or resource busy Sep 13 00:56:22.187784 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem196135809", trying btrfs: device or resource busy Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem196135809" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem196135809" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem196135809" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem196135809" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Sep 13 00:56:22.187784 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:56:22.069332 systemd[1]: mnt-oem2393414538.mount: Deactivated successfully. 
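The OEM-partition writes above all follow the same visible pattern: mount /dev/disk/by-label/OEM at a temporary directory, first as ext4, then again as btrfs after the ext4 attempt fails with "device or resource busy", write the file, and unmount. A rough Go sketch of that try-one-filesystem-then-the-next fallback is below, using golang.org/x/sys/unix and a hypothetical target directory; it is not Ignition's implementation, only the pattern the log shows.

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // mountOEM tries the filesystem types in order, mirroring the ext4-then-btrfs
    // fallback in the log above. Device and target are placeholders.
    func mountOEM(device, target string) error {
        var lastErr error
        for _, fstype := range []string{"ext4", "btrfs"} {
            if err := unix.Mount(device, target, fstype, 0, ""); err != nil {
                lastErr = fmt.Errorf("mount %s as %s: %w", device, fstype, err)
                continue
            }
            return nil
        }
        return lastErr
    }

    func main() {
        if err := mountOEM("/dev/disk/by-label/OEM", "/mnt/oem"); err != nil {
            fmt.Println(err)
        }
    }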
Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:56:22.452791 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4105919508" Sep 13 00:56:22.452791 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4105919508": device or resource busy Sep 13 00:56:22.452791 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4105919508", trying btrfs: device or resource busy Sep 13 00:56:22.103919 systemd[1]: mnt-oem4105919508.mount: Deactivated successfully. 
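Alongside downloading the Kubernetes sysext image into /opt/extensions, the files stage links it into /etc/extensions (the "writing link" operation in op(12) above) so the extension can be activated once the real root is running. A small illustrative Go sketch of that link operation under the /sysroot prefix, with the paths copied from the log:

    package main

    import (
        "os"
        "path/filepath"
    )

    func main() {
        // Paths taken from the "writing link" messages above; /sysroot is the
        // new root still mounted inside the initramfs.
        target := "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
        link := "/sysroot/etc/extensions/kubernetes.raw"

        if err := os.MkdirAll(filepath.Dir(link), 0755); err != nil {
            panic(err)
        }
        // The symlink target is recorded as-is; it resolves against the real
        // root after the system switches out of the initramfs.
        if err := os.Symlink(target, link); err != nil {
            panic(err)
        }
    }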
Sep 13 00:56:22.704888 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4105919508" Sep 13 00:56:22.704888 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4105919508" Sep 13 00:56:22.704888 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem4105919508" Sep 13 00:56:22.704888 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem4105919508" Sep 13 00:56:22.704888 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Sep 13 00:56:22.704888 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:56:22.704888 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:56:22.704888 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Sep 13 00:56:22.966464 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:56:22.966464 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:56:23.002829 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1979110168" Sep 13 00:56:23.002829 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1979110168": device or resource busy Sep 13 00:56:23.002829 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1979110168", trying btrfs: device or resource busy Sep 13 00:56:23.002829 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1979110168" Sep 13 00:56:23.002829 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1979110168" Sep 13 00:56:23.002829 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem1979110168" Sep 13 00:56:23.002829 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem1979110168" Sep 13 00:56:23.002829 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: op(1d): [started] processing unit 
"oem-gce.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: op(1d): [finished] processing unit "oem-gce.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: op(1f): [started] processing unit "containerd.service" Sep 13 00:56:23.002829 ignition[826]: INFO : files: op(1f): op(20): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:56:23.504808 kernel: audit: type=1130 audit(1757724983.001:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.504975 kernel: audit: type=1130 audit(1757724983.094:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.504993 kernel: audit: type=1130 audit(1757724983.151:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.505012 kernel: audit: type=1131 audit(1757724983.151:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.505040 kernel: audit: type=1130 audit(1757724983.252:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.505056 kernel: audit: type=1131 audit(1757724983.252:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.505070 kernel: audit: type=1130 audit(1757724983.372:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:23.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:22.988347 systemd[1]: Finished ignition-files.service. Sep 13 00:56:23.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(1f): op(20): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(1f): [finished] processing unit "containerd.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(21): [started] processing unit "prepare-helm.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(21): op(22): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(21): op(22): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(21): [finished] processing unit "prepare-helm.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(25): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(26): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:56:23.520041 ignition[826]: INFO : files: op(26): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:56:23.520041 ignition[826]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:56:23.520041 ignition[826]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:56:23.520041 ignition[826]: INFO : files: files passed Sep 13 00:56:23.520041 ignition[826]: INFO : Ignition finished successfully Sep 13 00:56:23.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:23.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.013548 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:56:23.045089 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:56:23.892801 initrd-setup-root-after-ignition[849]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:56:23.046384 systemd[1]: Starting ignition-quench.service... Sep 13 00:56:23.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.928882 iscsid[692]: iscsid shutting down. Sep 13 00:56:23.078096 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:56:23.096422 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:56:23.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.096569 systemd[1]: Finished ignition-quench.service. Sep 13 00:56:23.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.153059 systemd[1]: Reached target ignition-complete.target. Sep 13 00:56:23.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.211140 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:56:24.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.248248 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:56:24.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:24.028906 ignition[864]: INFO : Ignition 2.14.0 Sep 13 00:56:24.028906 ignition[864]: INFO : Stage: umount Sep 13 00:56:24.028906 ignition[864]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:56:24.028906 ignition[864]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Sep 13 00:56:24.028906 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:56:24.028906 ignition[864]: INFO : umount: umount passed Sep 13 00:56:24.028906 ignition[864]: INFO : Ignition finished successfully Sep 13 00:56:24.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.248368 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:56:23.254182 systemd[1]: Reached target initrd-fs.target. Sep 13 00:56:23.315042 systemd[1]: Reached target initrd.target. Sep 13 00:56:23.338089 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:56:23.339412 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:56:23.356285 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:56:24.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.375493 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:56:24.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.434361 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:56:23.447007 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:56:24.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.468079 systemd[1]: Stopped target timers.target. Sep 13 00:56:24.314816 kernel: kauditd_printk_skb: 18 callbacks suppressed Sep 13 00:56:24.314861 kernel: audit: type=1130 audit(1757724984.261:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.314888 kernel: audit: type=1131 audit(1757724984.261:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:24.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.486017 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:56:23.486220 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:56:23.513274 systemd[1]: Stopped target initrd.target. Sep 13 00:56:23.527019 systemd[1]: Stopped target basic.target. Sep 13 00:56:23.554052 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:56:24.406948 kernel: audit: type=1131 audit(1757724984.371:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.565142 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:56:24.449808 kernel: audit: type=1131 audit(1757724984.413:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.449853 kernel: audit: type=1334 audit(1757724984.434:66): prog-id=6 op=UNLOAD Sep 13 00:56:24.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.434000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:56:23.583147 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:56:23.605199 systemd[1]: Stopped target remote-fs.target. Sep 13 00:56:23.629194 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:56:24.515864 kernel: audit: type=1131 audit(1757724984.485:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.646203 systemd[1]: Stopped target sysinit.target. Sep 13 00:56:24.550813 kernel: audit: type=1131 audit(1757724984.522:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.664165 systemd[1]: Stopped target local-fs.target. Sep 13 00:56:24.585821 kernel: audit: type=1131 audit(1757724984.557:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:24.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.682169 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:56:23.702154 systemd[1]: Stopped target swap.target. Sep 13 00:56:24.633814 kernel: audit: type=1131 audit(1757724984.598:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.740052 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:56:23.740250 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:56:23.753328 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:56:24.699822 kernel: audit: type=1131 audit(1757724984.671:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.793060 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:56:24.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.793277 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:56:24.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.808345 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:56:23.808691 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:56:23.829204 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:56:24.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.829469 systemd[1]: Stopped ignition-files.service. Sep 13 00:56:24.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.845648 systemd[1]: Stopping ignition-mount.service... Sep 13 00:56:24.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:24.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:23.876091 systemd[1]: Stopping iscsid.service... Sep 13 00:56:23.899855 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 13 00:56:23.900124 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:56:24.838000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:56:24.838000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:56:23.923496 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:56:24.842000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:56:24.842000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:56:24.842000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:56:23.943898 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:56:23.944147 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:56:24.872780 systemd-journald[189]: Received SIGTERM from PID 1 (n/a). Sep 13 00:56:23.960160 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:56:23.960348 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:56:23.980065 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:56:23.981032 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:56:23.981157 systemd[1]: Stopped iscsid.service. Sep 13 00:56:23.990579 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:56:23.990717 systemd[1]: Stopped ignition-mount.service. Sep 13 00:56:24.006634 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:56:24.006750 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:56:24.021727 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:56:24.021864 systemd[1]: Stopped ignition-disks.service. Sep 13 00:56:24.036864 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:56:24.036954 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:56:24.044019 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:56:24.044084 systemd[1]: Stopped ignition-fetch.service. Sep 13 00:56:24.073971 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:56:24.074053 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:56:24.093019 systemd[1]: Stopped target paths.target. Sep 13 00:56:24.110032 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:56:24.113728 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:56:24.137887 systemd[1]: Stopped target slices.target. Sep 13 00:56:24.151809 systemd[1]: Stopped target sockets.target. Sep 13 00:56:24.166929 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:56:24.166992 systemd[1]: Closed iscsid.socket. Sep 13 00:56:24.173984 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:56:24.174054 systemd[1]: Stopped ignition-setup.service. Sep 13 00:56:24.205117 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:56:24.205301 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:56:24.220134 systemd[1]: Stopping iscsiuio.service... Sep 13 00:56:24.234380 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:56:24.234504 systemd[1]: Stopped iscsiuio.service. Sep 13 00:56:24.241326 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:56:24.241449 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:56:24.264304 systemd[1]: Stopped target network.target. Sep 13 00:56:24.322824 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:56:24.322914 systemd[1]: Closed iscsiuio.socket. Sep 13 00:56:24.338021 systemd[1]: Stopping systemd-networkd.service... 
Sep 13 00:56:24.341681 systemd-networkd[682]: eth0: DHCPv6 lease lost Sep 13 00:56:24.882000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:56:24.345091 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:56:24.366267 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:56:24.366412 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:56:24.394828 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:56:24.395048 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:56:24.415213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:56:24.415273 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:56:24.458870 systemd[1]: Stopping network-cleanup.service... Sep 13 00:56:24.465902 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:56:24.465989 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:56:24.486939 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:56:24.487050 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:56:24.545391 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:56:24.545463 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:56:24.559123 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:56:24.594567 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:56:24.595231 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:56:24.595400 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:56:24.624031 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:56:24.624118 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:56:24.641985 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:56:24.642043 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:56:24.657911 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:56:24.657997 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:56:24.673076 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:56:24.673178 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:56:24.709213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:56:24.709305 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:56:24.727078 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:56:24.750783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:56:24.750928 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:56:24.767443 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:56:24.767585 systemd[1]: Stopped network-cleanup.service. Sep 13 00:56:24.783220 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:56:24.783335 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:56:24.798078 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:56:24.814881 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:56:24.837255 systemd[1]: Switching root. Sep 13 00:56:24.886711 systemd-journald[189]: Journal stopped Sep 13 00:56:29.568135 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:56:29.568269 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 00:56:29.568301 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:56:29.568335 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:56:29.568364 kernel: SELinux: policy capability open_perms=1 Sep 13 00:56:29.568391 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:56:29.568414 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:56:29.568438 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:56:29.568460 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:56:29.568483 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:56:29.568505 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:56:29.568533 systemd[1]: Successfully loaded SELinux policy in 116.941ms. Sep 13 00:56:29.568585 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.824ms. Sep 13 00:56:29.568625 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:56:29.568654 systemd[1]: Detected virtualization kvm. Sep 13 00:56:29.568678 systemd[1]: Detected architecture x86-64. Sep 13 00:56:29.568700 systemd[1]: Detected first boot. Sep 13 00:56:29.568723 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:56:29.568747 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:56:29.568771 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:56:29.568796 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:56:29.568829 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:56:29.568863 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:56:29.568895 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:56:29.568918 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 13 00:56:29.568942 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:56:29.568967 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:56:29.568992 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:56:29.569015 systemd[1]: Created slice system-getty.slice. Sep 13 00:56:29.569043 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:56:29.569067 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:56:29.569099 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:56:29.569123 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:56:29.569148 systemd[1]: Created slice user.slice. Sep 13 00:56:29.569171 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:56:29.569194 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:56:29.569218 systemd[1]: Set up automount boot.automount. Sep 13 00:56:29.569242 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
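On this first boot systemd reports initializing the machine ID from the VM UUID. On a KVM guest such as this GCE instance the UUID is exposed through DMI; the Go sketch below reads the conventional sysfs path and reshapes the value into the 32-character lowercase form that /etc/machine-id uses. The exact mechanism inside systemd is an assumption here; the snippet is only an illustration.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // DMI product UUID supplied by the hypervisor; this is the usual sysfs
        // location on x86 guests (assumption for illustration).
        raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
        if err != nil {
            panic(err)
        }
        uuid := strings.TrimSpace(string(raw))

        // /etc/machine-id holds 32 lowercase hex characters, so strip the
        // dashes and lowercase the DMI value to get the same shape.
        id := strings.ToLower(strings.ReplaceAll(uuid, "-", ""))
        fmt.Println(id)
    }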
Sep 13 00:56:29.569275 systemd[1]: Reached target integritysetup.target. Sep 13 00:56:29.569299 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:56:29.569324 systemd[1]: Reached target remote-fs.target. Sep 13 00:56:29.569349 systemd[1]: Reached target slices.target. Sep 13 00:56:29.569372 systemd[1]: Reached target swap.target. Sep 13 00:56:29.569397 systemd[1]: Reached target torcx.target. Sep 13 00:56:29.569422 systemd[1]: Reached target veritysetup.target. Sep 13 00:56:29.569445 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:56:29.569474 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:56:29.569497 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:56:29.569520 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:56:29.569544 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:56:29.569570 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:56:29.569594 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:56:29.569630 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:56:29.569655 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:56:29.569679 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:56:29.569704 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:56:29.569733 systemd[1]: Mounting media.mount... Sep 13 00:56:29.569758 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:29.569782 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:56:29.569806 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:56:29.569829 systemd[1]: Mounting tmp.mount... Sep 13 00:56:29.569860 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:56:29.569884 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:29.569909 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:56:29.569934 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:56:29.569961 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:29.569985 systemd[1]: Starting modprobe@drm.service... Sep 13 00:56:29.570009 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:56:29.570033 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:56:29.570058 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:29.570081 kernel: fuse: init (API version 7.34) Sep 13 00:56:29.570104 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:56:29.570128 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:56:29.570152 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:56:29.570178 kernel: loop: module loaded Sep 13 00:56:29.570202 systemd[1]: Starting systemd-journald.service... Sep 13 00:56:29.570226 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:56:29.570250 systemd[1]: Starting systemd-network-generator.service... 
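The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop instances started above each load one kernel module named by the template's instance string; the kernel's "fuse: init" and "loop: module loaded" lines are the visible result. A hedged Go sketch of the same effect, shelling out to modprobe for the modules listed in the log (the -q flag is an ordinary modprobe option, not a quote of the unit file):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Module names taken from the modprobe@*.service instances in the log.
        modules := []string{"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}
        for _, m := range modules {
            // -q keeps modprobe quiet when a module is built in or already loaded.
            if out, err := exec.Command("modprobe", "-q", m).CombinedOutput(); err != nil {
                fmt.Printf("modprobe %s: %v (%s)\n", m, err, out)
            }
        }
    }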
Sep 13 00:56:29.570273 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 13 00:56:29.570296 kernel: audit: type=1305 audit(1757724989.542:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:56:29.570321 kernel: audit: type=1300 audit(1757724989.542:90): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe5d22cf50 a2=4000 a3=7ffe5d22cfec items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:29.570351 systemd-journald[1026]: Journal started Sep 13 00:56:29.570450 systemd-journald[1026]: Runtime Journal (/run/log/journal/79e37f81270ac927595ebe628b239327) is 8.0M, max 148.8M, 140.8M free. Sep 13 00:56:29.108000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:56:29.108000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:56:29.542000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:56:29.542000 audit[1026]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe5d22cf50 a2=4000 a3=7ffe5d22cfec items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:29.542000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:56:29.611081 kernel: audit: type=1327 audit(1757724989.542:90): proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:56:29.611183 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:56:29.632653 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:56:29.651641 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:29.661654 systemd[1]: Started systemd-journald.service. Sep 13 00:56:29.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.670974 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:56:29.695646 kernel: audit: type=1130 audit(1757724989.667:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.700010 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:56:29.706876 systemd[1]: Mounted media.mount. Sep 13 00:56:29.713946 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:56:29.722857 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:56:29.731927 systemd[1]: Mounted tmp.mount. Sep 13 00:56:29.739182 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:56:29.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:29.748346 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:56:29.770649 kernel: audit: type=1130 audit(1757724989.746:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.779272 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:56:29.779544 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:56:29.801725 kernel: audit: type=1130 audit(1757724989.777:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.810426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:29.810734 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:56:29.854370 kernel: audit: type=1130 audit(1757724989.808:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.854630 kernel: audit: type=1131 audit(1757724989.808:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.863373 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:56:29.863664 systemd[1]: Finished modprobe@drm.service. Sep 13 00:56:29.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.907010 kernel: audit: type=1130 audit(1757724989.861:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.907256 kernel: audit: type=1131 audit(1757724989.861:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:56:29.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.916247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:29.916489 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:29.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.925263 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:56:29.925503 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:56:29.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.934183 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:29.934511 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:29.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.943321 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:56:29.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.952253 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:56:29.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.962295 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:56:29.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.971283 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:56:29.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:29.980360 systemd[1]: Reached target network-pre.target. 
Sep 13 00:56:29.990316 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:56:30.000310 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:56:30.007808 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:56:30.011178 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:56:30.020569 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:56:30.029379 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:56:30.031892 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:56:30.038827 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:56:30.040889 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:56:30.044315 systemd-journald[1026]: Time spent on flushing to /var/log/journal/79e37f81270ac927595ebe628b239327 is 67.107ms for 1109 entries. Sep 13 00:56:30.044315 systemd-journald[1026]: System Journal (/var/log/journal/79e37f81270ac927595ebe628b239327) is 8.0M, max 584.8M, 576.8M free. Sep 13 00:56:30.131092 systemd-journald[1026]: Received client request to flush runtime journal. Sep 13 00:56:30.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:30.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:30.057946 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:56:30.066815 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:56:30.078741 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:56:30.133557 udevadm[1048]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:56:30.087967 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:56:30.097259 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:56:30.106464 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:56:30.118216 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:56:30.132498 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:56:30.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:30.149316 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:56:30.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:30.160076 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:56:30.226550 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:56:30.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:30.754515 systemd[1]: Finished systemd-hwdb-update.service. 
Sep 13 00:56:30.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:30.765815 systemd[1]: Starting systemd-udevd.service... Sep 13 00:56:30.795560 systemd-udevd[1059]: Using default interface naming scheme 'v252'. Sep 13 00:56:30.850472 systemd[1]: Started systemd-udevd.service. Sep 13 00:56:30.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:30.865824 systemd[1]: Starting systemd-networkd.service... Sep 13 00:56:30.880161 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:56:30.946297 systemd[1]: Found device dev-ttyS0.device. Sep 13 00:56:30.959148 systemd[1]: Started systemd-userdbd.service. Sep 13 00:56:30.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.063656 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:56:31.096813 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:56:31.105633 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 13 00:56:31.116635 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:56:31.133276 systemd-networkd[1071]: lo: Link UP Sep 13 00:56:31.133297 systemd-networkd[1071]: lo: Gained carrier Sep 13 00:56:31.134185 systemd-networkd[1071]: Enumeration completed Sep 13 00:56:31.134388 systemd[1]: Started systemd-networkd.service. Sep 13 00:56:31.134851 systemd-networkd[1071]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:56:31.137193 systemd-networkd[1071]: eth0: Link UP Sep 13 00:56:31.137212 systemd-networkd[1071]: eth0: Gained carrier Sep 13 00:56:31.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:31.161835 systemd-networkd[1071]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4.c.flatcar-212911.internal' to 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:56:31.161868 systemd-networkd[1071]: eth0: DHCPv4 address 10.128.0.69/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 13 00:56:31.139000 audit[1067]: AVC avc: denied { confidentiality } for pid=1067 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:56:31.199646 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:56:31.139000 audit[1067]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55870fdc9600 a1=338ec a2=7f71a03c9bc5 a3=5 items=110 ppid=1059 pid=1067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:31.139000 audit: CWD cwd="/" Sep 13 00:56:31.139000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=1 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=2 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=3 name=(null) inode=14401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=4 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=5 name=(null) inode=14402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=6 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=7 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=8 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=9 name=(null) inode=14404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=10 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=11 name=(null) inode=14405 dev=00:0b mode=0100440 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=12 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=13 name=(null) inode=14406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=14 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=15 name=(null) inode=14407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=16 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=17 name=(null) inode=14408 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=18 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=19 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=20 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=21 name=(null) inode=14410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=22 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=23 name=(null) inode=14411 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=24 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=25 name=(null) inode=14412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=26 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=27 name=(null) inode=14413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=28 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=29 name=(null) inode=14414 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=30 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=31 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=32 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=33 name=(null) inode=14416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=34 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=35 name=(null) inode=14417 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=36 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=37 name=(null) inode=14418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=38 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=39 name=(null) inode=14419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=40 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=41 name=(null) inode=14420 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=42 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=43 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 
audit: PATH item=44 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=45 name=(null) inode=14422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=46 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=47 name=(null) inode=14423 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=48 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=49 name=(null) inode=14424 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=50 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=51 name=(null) inode=14425 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=52 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=53 name=(null) inode=14426 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=55 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=56 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=57 name=(null) inode=14428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=58 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=59 name=(null) inode=14429 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=60 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=61 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=62 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=63 name=(null) inode=14431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=64 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=65 name=(null) inode=14432 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=66 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=67 name=(null) inode=14433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=68 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=69 name=(null) inode=14434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=70 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=71 name=(null) inode=14435 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=72 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=73 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=74 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=75 name=(null) inode=14437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=76 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=77 name=(null) inode=14438 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=78 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=79 name=(null) inode=14439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=80 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=81 name=(null) inode=14440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=82 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=83 name=(null) inode=14441 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=84 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=85 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=86 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=87 name=(null) inode=14443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=88 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=89 name=(null) inode=14444 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=90 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=91 name=(null) inode=14445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=92 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 
audit: PATH item=93 name=(null) inode=14446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=94 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=95 name=(null) inode=14447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=96 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=97 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=98 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=99 name=(null) inode=14449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=100 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=101 name=(null) inode=14450 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=102 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=103 name=(null) inode=14451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=104 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=105 name=(null) inode=14452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=106 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=107 name=(null) inode=14453 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PATH item=109 name=(null) inode=13988 dev=00:07 mode=040755 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:31.139000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:56:31.277643 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:56:31.296634 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:56:31.316673 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:56:31.322331 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:56:31.333373 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:56:31.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.344749 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:56:31.377672 lvm[1097]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:56:31.412284 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:56:31.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.421187 systemd[1]: Reached target cryptsetup.target. Sep 13 00:56:31.431467 systemd[1]: Starting lvm2-activation.service... Sep 13 00:56:31.438270 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:56:31.464325 systemd[1]: Finished lvm2-activation.service. Sep 13 00:56:31.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.473140 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:56:31.481814 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:56:31.481873 systemd[1]: Reached target local-fs.target. Sep 13 00:56:31.490862 systemd[1]: Reached target machines.target. Sep 13 00:56:31.501530 systemd[1]: Starting ldconfig.service... Sep 13 00:56:31.510086 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:31.510187 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:31.512152 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:56:31.521449 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:56:31.534776 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:56:31.537166 systemd[1]: Starting systemd-sysext.service... Sep 13 00:56:31.537881 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1102 (bootctl) Sep 13 00:56:31.541127 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:56:31.567467 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Sep 13 00:56:31.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.569982 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:56:31.580936 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:56:31.581394 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:56:31.611661 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:56:31.712075 systemd-fsck[1114]: fsck.fat 4.2 (2021-01-31) Sep 13 00:56:31.712075 systemd-fsck[1114]: /dev/sda1: 790 files, 120761/258078 clusters Sep 13 00:56:31.716021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:56:31.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.730190 systemd[1]: Mounting boot.mount... Sep 13 00:56:31.754834 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:56:31.756245 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:56:31.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.765080 systemd[1]: Mounted boot.mount. Sep 13 00:56:31.789193 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:56:31.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.812637 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:56:31.846869 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:56:31.868556 (sd-sysext)[1125]: Using extensions 'kubernetes'. Sep 13 00:56:31.869723 (sd-sysext)[1125]: Merged extensions into '/usr'. Sep 13 00:56:31.900185 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:31.902516 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:56:31.910630 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:31.915524 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:31.926311 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:56:31.936475 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:31.943871 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:31.944122 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:31.944347 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:31.949839 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:56:31.957245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:31.957570 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:56:31.966497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:31.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.966814 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:31.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.976589 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:31.976882 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:31.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:31.986696 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:56:31.986898 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:56:31.989908 systemd[1]: Finished systemd-sysext.service. Sep 13 00:56:31.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:32.002521 systemd[1]: Starting ensure-sysext.service... Sep 13 00:56:32.013157 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:56:32.026502 systemd[1]: Reloading. Sep 13 00:56:32.036313 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:56:32.043432 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:56:32.051800 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 13 00:56:32.214572 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-09-13T00:56:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:56:32.214643 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-09-13T00:56:32Z" level=info msg="torcx already run" Sep 13 00:56:32.244132 ldconfig[1101]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:56:32.379143 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:56:32.379171 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:56:32.402683 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:56:32.489223 systemd[1]: Finished ldconfig.service. Sep 13 00:56:32.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:32.498767 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:56:32.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:32.514700 systemd[1]: Starting audit-rules.service... Sep 13 00:56:32.523819 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:56:32.534235 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 13 00:56:32.545250 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:56:32.556779 systemd[1]: Starting systemd-resolved.service... Sep 13 00:56:32.567040 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:56:32.576083 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:56:32.585000 audit[1237]: SYSTEM_BOOT pid=1237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:56:32.588979 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:56:32.597922 augenrules[1244]: No rules Sep 13 00:56:32.598479 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 13 00:56:32.598902 systemd[1]: Finished oem-gce-enable-oslogin.service. 
Sep 13 00:56:32.596000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:56:32.596000 audit[1244]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc62170700 a2=420 a3=0 items=0 ppid=1212 pid=1244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:32.596000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:56:32.608783 systemd[1]: Finished audit-rules.service. Sep 13 00:56:32.617642 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:56:32.636117 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:32.636752 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:32.640695 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:32.649858 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:56:32.658732 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:32.668080 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 13 00:56:32.676292 enable-oslogin[1258]: /etc/pam.d/sshd already exists. Not enabling OS Login Sep 13 00:56:32.676815 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:32.677081 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:32.679752 systemd[1]: Starting systemd-update-done.service... Sep 13 00:56:32.686756 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:56:32.686989 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:32.689889 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:56:32.699473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:32.699751 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:56:32.709570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:32.709871 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:32.719541 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:32.719839 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:32.728530 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 13 00:56:32.728918 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 13 00:56:32.738754 systemd[1]: Finished systemd-update-done.service. Sep 13 00:56:32.752095 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:32.752649 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:32.756139 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:32.765952 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:56:32.774844 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:32.784156 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 13 00:56:32.791007 enable-oslogin[1269]: /etc/pam.d/sshd already exists. 
Not enabling OS Login Sep 13 00:56:32.792820 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:32.793075 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:32.793286 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:56:32.793455 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:32.795763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:32.796039 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:56:32.807874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:32.808175 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:32.817498 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:32.817811 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:32.826557 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 13 00:56:32.826993 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 13 00:56:32.836596 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:56:32.836832 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:56:32.842792 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:56:32.844995 systemd-timesyncd[1233]: Contacted time server 169.254.169.254:123 (169.254.169.254). Sep 13 00:56:32.845587 systemd-timesyncd[1233]: Initial clock synchronization to Sat 2025-09-13 00:56:32.694391 UTC. Sep 13 00:56:32.851497 systemd-resolved[1228]: Positive Trust Anchors: Sep 13 00:56:32.851516 systemd-resolved[1228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:56:32.851591 systemd-resolved[1228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:56:32.852084 systemd[1]: Reached target time-set.target. Sep 13 00:56:32.860927 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:32.861409 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:32.863731 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:32.874150 systemd[1]: Starting modprobe@drm.service... Sep 13 00:56:32.887554 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:56:32.896631 systemd-resolved[1228]: Defaulting to hostname 'linux'. Sep 13 00:56:32.898103 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:32.908173 systemd[1]: Starting oem-gce-enable-oslogin.service... Sep 13 00:56:32.912909 enable-oslogin[1281]: /etc/pam.d/sshd already exists. 
Not enabling OS Login Sep 13 00:56:32.916938 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:32.917205 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:32.919824 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:56:32.926832 systemd-networkd[1071]: eth0: Gained IPv6LL Sep 13 00:56:32.928831 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:56:32.929093 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:32.931522 systemd[1]: Started systemd-resolved.service. Sep 13 00:56:32.940682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:32.940961 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:56:32.950360 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:56:32.950637 systemd[1]: Finished modprobe@drm.service. Sep 13 00:56:32.959350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:32.959621 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:32.968417 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:32.968729 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:32.977416 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Sep 13 00:56:32.977778 systemd[1]: Finished oem-gce-enable-oslogin.service. Sep 13 00:56:32.986508 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:56:32.997722 systemd[1]: Reached target network.target. Sep 13 00:56:33.005868 systemd[1]: Reached target network-online.target. Sep 13 00:56:33.014798 systemd[1]: Reached target nss-lookup.target. Sep 13 00:56:33.022866 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:56:33.022938 systemd[1]: Reached target sysinit.target. Sep 13 00:56:33.030902 systemd[1]: Started motdgen.path. Sep 13 00:56:33.037860 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:56:33.048052 systemd[1]: Started logrotate.timer. Sep 13 00:56:33.054896 systemd[1]: Started mdadm.timer. Sep 13 00:56:33.061787 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:56:33.069773 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:56:33.069836 systemd[1]: Reached target paths.target. Sep 13 00:56:33.076768 systemd[1]: Reached target timers.target. Sep 13 00:56:33.084642 systemd[1]: Listening on dbus.socket. Sep 13 00:56:33.093291 systemd[1]: Starting docker.socket... Sep 13 00:56:33.103346 systemd[1]: Listening on sshd.socket. Sep 13 00:56:33.110957 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:33.111064 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:56:33.112031 systemd[1]: Finished ensure-sysext.service. Sep 13 00:56:33.120995 systemd[1]: Listening on docker.socket. Sep 13 00:56:33.128802 systemd[1]: Reached target sockets.target. 
Sep 13 00:56:33.136769 systemd[1]: Reached target basic.target. Sep 13 00:56:33.144043 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:56:33.144126 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:56:33.144162 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:56:33.145815 systemd[1]: Starting containerd.service... Sep 13 00:56:33.155086 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 00:56:33.165812 systemd[1]: Starting dbus.service... Sep 13 00:56:33.173508 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:56:33.182260 systemd[1]: Starting extend-filesystems.service... Sep 13 00:56:33.188600 jq[1293]: false Sep 13 00:56:33.189784 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:56:33.192371 systemd[1]: Starting kubelet.service... Sep 13 00:56:33.199832 systemd[1]: Starting motdgen.service... Sep 13 00:56:33.207310 systemd[1]: Starting oem-gce.service... Sep 13 00:56:33.219324 systemd[1]: Starting prepare-helm.service... Sep 13 00:56:33.229179 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:56:33.237885 systemd[1]: Starting sshd-keygen.service... Sep 13 00:56:33.248497 systemd[1]: Starting systemd-logind.service... Sep 13 00:56:33.255780 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:33.256498 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 13 00:56:33.258799 systemd[1]: Starting update-engine.service... Sep 13 00:56:33.266391 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:56:33.272036 jq[1317]: true Sep 13 00:56:33.279888 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:56:33.280308 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:56:33.325932 jq[1324]: true Sep 13 00:56:33.335526 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:56:33.336020 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:56:33.345472 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:56:33.345889 systemd[1]: Finished motdgen.service. 
Sep 13 00:56:33.352144 mkfs.ext4[1328]: mke2fs 1.46.5 (30-Dec-2021) Sep 13 00:56:33.358634 mkfs.ext4[1328]: Discarding device blocks: done Sep 13 00:56:33.358634 mkfs.ext4[1328]: Creating filesystem with 262144 4k blocks and 65536 inodes Sep 13 00:56:33.358634 mkfs.ext4[1328]: Filesystem UUID: 9fc981cd-582c-4c95-88ca-5f17f115c927 Sep 13 00:56:33.358634 mkfs.ext4[1328]: Superblock backups stored on blocks: Sep 13 00:56:33.358634 mkfs.ext4[1328]: 32768, 98304, 163840, 229376 Sep 13 00:56:33.358634 mkfs.ext4[1328]: Allocating group tables: done Sep 13 00:56:33.358634 mkfs.ext4[1328]: Writing inode tables: done Sep 13 00:56:33.381240 mkfs.ext4[1328]: Creating journal (8192 blocks): done Sep 13 00:56:33.399304 mkfs.ext4[1328]: Writing superblocks and filesystem accounting information: done Sep 13 00:56:33.413485 extend-filesystems[1294]: Found loop1 Sep 13 00:56:33.422005 extend-filesystems[1294]: Found sda Sep 13 00:56:33.428836 extend-filesystems[1294]: Found sda1 Sep 13 00:56:33.428836 extend-filesystems[1294]: Found sda2 Sep 13 00:56:33.428836 extend-filesystems[1294]: Found sda3 Sep 13 00:56:33.428836 extend-filesystems[1294]: Found usr Sep 13 00:56:33.428836 extend-filesystems[1294]: Found sda4 Sep 13 00:56:33.428836 extend-filesystems[1294]: Found sda6 Sep 13 00:56:33.428836 extend-filesystems[1294]: Found sda7 Sep 13 00:56:33.428836 extend-filesystems[1294]: Found sda9 Sep 13 00:56:33.428836 extend-filesystems[1294]: Checking size of /dev/sda9 Sep 13 00:56:33.495840 extend-filesystems[1294]: Resized partition /dev/sda9 Sep 13 00:56:33.503836 tar[1323]: linux-amd64/helm Sep 13 00:56:33.504386 update_engine[1315]: I0913 00:56:33.503449 1315 main.cc:92] Flatcar Update Engine starting Sep 13 00:56:33.504766 umount[1354]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Sep 13 00:56:33.508819 extend-filesystems[1366]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:56:33.518687 dbus-daemon[1292]: [system] SELinux support is enabled Sep 13 00:56:33.519412 systemd[1]: Started dbus.service. Sep 13 00:56:33.530455 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:56:33.530507 systemd[1]: Reached target system-config.target. Sep 13 00:56:33.536657 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 13 00:56:33.536735 kernel: loop2: detected capacity change from 0 to 2097152 Sep 13 00:56:33.537882 dbus-daemon[1292]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1071 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:56:33.548785 update_engine[1315]: I0913 00:56:33.548601 1315 update_check_scheduler.cc:74] Next update check in 7m20s Sep 13 00:56:33.551827 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:56:33.551870 systemd[1]: Reached target user-config.target. Sep 13 00:56:33.565893 systemd[1]: Started update-engine.service. 
Sep 13 00:56:33.566438 dbus-daemon[1292]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:56:33.568478 bash[1368]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:56:33.575353 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:56:33.583648 env[1325]: time="2025-09-13T00:56:33.583169421Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:56:33.593455 systemd[1]: Started locksmithd.service. Sep 13 00:56:33.603874 systemd[1]: Starting systemd-hostnamed.service... Sep 13 00:56:33.654960 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 13 00:56:33.655092 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:56:33.695172 extend-filesystems[1366]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 13 00:56:33.695172 extend-filesystems[1366]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 13 00:56:33.695172 extend-filesystems[1366]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 13 00:56:33.745822 extend-filesystems[1294]: Resized filesystem in /dev/sda9 Sep 13 00:56:33.697457 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:56:33.697918 systemd[1]: Finished extend-filesystems.service. Sep 13 00:56:33.858299 systemd-logind[1310]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:56:33.862714 systemd-logind[1310]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:56:33.864726 systemd-logind[1310]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:56:33.882826 systemd-logind[1310]: New seat seat0. Sep 13 00:56:33.893385 env[1325]: time="2025-09-13T00:56:33.893324877Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:56:33.893831 env[1325]: time="2025-09-13T00:56:33.893799490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:33.896989 systemd[1]: Started systemd-logind.service. Sep 13 00:56:33.898103 dbus-daemon[1292]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:56:33.898908 dbus-daemon[1292]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1375 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:56:33.906180 systemd[1]: Started systemd-hostnamed.service. Sep 13 00:56:33.912302 env[1325]: time="2025-09-13T00:56:33.912241010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:56:33.917988 systemd[1]: Starting polkit.service... Sep 13 00:56:33.922547 env[1325]: time="2025-09-13T00:56:33.922497362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:33.924063 env[1325]: time="2025-09-13T00:56:33.924017083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:56:33.930715 env[1325]: time="2025-09-13T00:56:33.930667459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:33.931692 env[1325]: time="2025-09-13T00:56:33.931656678Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:56:33.931815 env[1325]: time="2025-09-13T00:56:33.931791434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:33.932053 env[1325]: time="2025-09-13T00:56:33.932027093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:33.932578 env[1325]: time="2025-09-13T00:56:33.932546407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:33.941553 env[1325]: time="2025-09-13T00:56:33.941495548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:56:33.943713 env[1325]: time="2025-09-13T00:56:33.943664341Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:56:33.944003 env[1325]: time="2025-09-13T00:56:33.943967676Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:56:33.944144 env[1325]: time="2025-09-13T00:56:33.944121604Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:56:33.952744 coreos-metadata[1291]: Sep 13 00:56:33.952 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 13 00:56:33.953738 env[1325]: time="2025-09-13T00:56:33.953676202Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:56:33.953969 env[1325]: time="2025-09-13T00:56:33.953941820Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:56:33.954186 env[1325]: time="2025-09-13T00:56:33.954160883Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:56:33.954654 env[1325]: time="2025-09-13T00:56:33.954625899Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:56:33.954849 env[1325]: time="2025-09-13T00:56:33.954822507Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:56:33.955137 env[1325]: time="2025-09-13T00:56:33.955064285Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:56:33.955319 env[1325]: time="2025-09-13T00:56:33.955293944Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:56:33.955642 env[1325]: time="2025-09-13T00:56:33.955577517Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Sep 13 00:56:33.955843 env[1325]: time="2025-09-13T00:56:33.955748128Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:56:33.956399 env[1325]: time="2025-09-13T00:56:33.956122546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:56:33.957106 env[1325]: time="2025-09-13T00:56:33.957074924Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:56:33.957285 env[1325]: time="2025-09-13T00:56:33.957252098Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:56:33.957654 env[1325]: time="2025-09-13T00:56:33.957629087Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:56:33.961309 coreos-metadata[1291]: Sep 13 00:56:33.961 INFO Fetch failed with 404: resource not found Sep 13 00:56:33.961440 coreos-metadata[1291]: Sep 13 00:56:33.961 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 13 00:56:33.961830 coreos-metadata[1291]: Sep 13 00:56:33.961 INFO Fetch successful Sep 13 00:56:33.961942 coreos-metadata[1291]: Sep 13 00:56:33.961 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 13 00:56:33.962317 coreos-metadata[1291]: Sep 13 00:56:33.962 INFO Fetch failed with 404: resource not found Sep 13 00:56:33.962409 coreos-metadata[1291]: Sep 13 00:56:33.962 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 13 00:56:33.962737 coreos-metadata[1291]: Sep 13 00:56:33.962 INFO Fetch failed with 404: resource not found Sep 13 00:56:33.962833 coreos-metadata[1291]: Sep 13 00:56:33.962 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 13 00:56:33.963387 coreos-metadata[1291]: Sep 13 00:56:33.963 INFO Fetch successful Sep 13 00:56:33.964807 env[1325]: time="2025-09-13T00:56:33.964760651Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:56:33.965895 env[1325]: time="2025-09-13T00:56:33.965848577Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:56:33.970800 env[1325]: time="2025-09-13T00:56:33.970703936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.971029 env[1325]: time="2025-09-13T00:56:33.971000723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:56:33.971394 env[1325]: time="2025-09-13T00:56:33.971259869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.973724 env[1325]: time="2025-09-13T00:56:33.973684527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.973876 env[1325]: time="2025-09-13T00:56:33.973852310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.974024 env[1325]: time="2025-09-13T00:56:33.974001984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 13 00:56:33.974158 env[1325]: time="2025-09-13T00:56:33.974136125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.974303 env[1325]: time="2025-09-13T00:56:33.974269541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.974447 env[1325]: time="2025-09-13T00:56:33.974425778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.974579 env[1325]: time="2025-09-13T00:56:33.974556068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.974787 env[1325]: time="2025-09-13T00:56:33.974766567Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:56:33.975166 env[1325]: time="2025-09-13T00:56:33.975138624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.975323 env[1325]: time="2025-09-13T00:56:33.975299670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.975469 env[1325]: time="2025-09-13T00:56:33.975446393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:56:33.975594 env[1325]: time="2025-09-13T00:56:33.975571951Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:56:33.975760 env[1325]: time="2025-09-13T00:56:33.975731672Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:56:33.975882 env[1325]: time="2025-09-13T00:56:33.975860301Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:56:33.976023 env[1325]: time="2025-09-13T00:56:33.975999722Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:56:33.976197 env[1325]: time="2025-09-13T00:56:33.976165503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:56:33.976853 env[1325]: time="2025-09-13T00:56:33.976743179Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:56:33.980740 env[1325]: time="2025-09-13T00:56:33.979689383Z" level=info msg="Connect containerd service" Sep 13 00:56:33.980878 env[1325]: time="2025-09-13T00:56:33.980849944Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:56:33.984534 env[1325]: time="2025-09-13T00:56:33.984479747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:56:33.986502 unknown[1291]: wrote ssh authorized keys file for user: core Sep 13 00:56:33.990034 env[1325]: time="2025-09-13T00:56:33.989997611Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:56:33.991807 env[1325]: time="2025-09-13T00:56:33.991762236Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:56:33.992190 systemd[1]: Started containerd.service. Sep 13 00:56:33.992657 env[1325]: time="2025-09-13T00:56:33.992577708Z" level=info msg="containerd successfully booted in 0.415992s" Sep 13 00:56:34.019213 update-ssh-keys[1388]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:56:34.020520 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
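The coreos-metadata-sshkeys run above walks the GCE metadata paths in order, falling back from instance attributes to project attributes until it finds a key list. Querying the same endpoints by hand looks roughly like this; the Metadata-Flavor header is a general GCE requirement, not something shown in the log.

    # Instance-level keys (the log got a 404 for sshKeys but a hit on ssh-keys).
    curl -fsS -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys
    # Project-level fallback, consulted when block-project-ssh-keys is not set.
    curl -fsS -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys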
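The long CRI configuration dump above (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:false, pause:3.6 sandbox image, CNI under /opt/cni/bin and /etc/cni/net.d) corresponds roughly to the containerd 1.6 config fragment below. This is an illustrative sketch only; on this image the values may simply be containerd's built-in defaults rather than an on-disk /etc/containerd/config.toml.

    # Hypothetical config.toml reproducing the logged CRI settings.
    cat >/etc/containerd/config.toml <<'EOF'
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
    EOF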
Sep 13 00:56:34.056721 env[1325]: time="2025-09-13T00:56:33.995269985Z" level=info msg="Start subscribing containerd event" Sep 13 00:56:34.057027 env[1325]: time="2025-09-13T00:56:34.056992242Z" level=info msg="Start recovering state" Sep 13 00:56:34.089414 env[1325]: time="2025-09-13T00:56:34.089352855Z" level=info msg="Start event monitor" Sep 13 00:56:34.090697 env[1325]: time="2025-09-13T00:56:34.090653942Z" level=info msg="Start snapshots syncer" Sep 13 00:56:34.090885 env[1325]: time="2025-09-13T00:56:34.090851647Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:56:34.090983 env[1325]: time="2025-09-13T00:56:34.090960705Z" level=info msg="Start streaming server" Sep 13 00:56:34.111553 polkitd[1385]: Started polkitd version 121 Sep 13 00:56:34.142419 polkitd[1385]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:56:34.149981 polkitd[1385]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:56:34.152647 polkitd[1385]: Finished loading, compiling and executing 2 rules Sep 13 00:56:34.153756 systemd[1]: Started polkit.service. Sep 13 00:56:34.153328 dbus-daemon[1292]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:56:34.153733 polkitd[1385]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:56:34.209139 systemd-hostnamed[1375]: Hostname set to (transient) Sep 13 00:56:34.213630 systemd-resolved[1228]: System hostname changed to 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4'. Sep 13 00:56:35.060064 tar[1323]: linux-amd64/LICENSE Sep 13 00:56:35.063452 tar[1323]: linux-amd64/README.md Sep 13 00:56:35.082217 systemd[1]: Finished prepare-helm.service. Sep 13 00:56:35.883558 systemd[1]: Started kubelet.service. Sep 13 00:56:36.327715 locksmithd[1373]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:56:37.267963 kubelet[1415]: E0913 00:56:37.267903 1415 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:56:37.270948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:56:37.271226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:56:39.728831 sshd_keygen[1336]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:56:39.771006 systemd[1]: Finished sshd-keygen.service. Sep 13 00:56:39.788454 systemd[1]: Starting issuegen.service... Sep 13 00:56:39.802782 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:56:39.803197 systemd[1]: Finished issuegen.service. Sep 13 00:56:39.814844 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:56:39.829059 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:56:39.840466 systemd[1]: Started getty@tty1.service. Sep 13 00:56:39.850083 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:56:39.860536 systemd[1]: Reached target getty.target. Sep 13 00:56:40.518131 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Sep 13 00:56:41.607054 systemd[1]: Created slice system-sshd.slice. Sep 13 00:56:41.618309 systemd[1]: Started sshd@0-10.128.0.69:22-139.178.68.195:41624.service. 
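The kubelet failure above (and its retry further down) is expected at this stage: /var/lib/kubelet/config.yaml does not exist until node provisioning writes it, typically via a kubeadm init/join. Purely for illustration, the file kubelet is looking for is a KubeletConfiguration object along these lines; the content here is hypothetical, not what this node eventually receives.

    # Illustrative only; real clusters generate this file during provisioning.
    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs   # consistent with SystemdCgroup = false logged above
    EOF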
Sep 13 00:56:42.021653 sshd[1440]: Accepted publickey for core from 139.178.68.195 port 41624 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:56:42.026361 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:42.043828 systemd[1]: Created slice user-500.slice. Sep 13 00:56:42.052485 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:56:42.063265 systemd-logind[1310]: New session 1 of user core. Sep 13 00:56:42.071668 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:56:42.082319 systemd[1]: Starting user@500.service... Sep 13 00:56:42.105530 (systemd)[1445]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:42.233131 systemd[1445]: Queued start job for default target default.target. Sep 13 00:56:42.234193 systemd[1445]: Reached target paths.target. Sep 13 00:56:42.234235 systemd[1445]: Reached target sockets.target. Sep 13 00:56:42.234260 systemd[1445]: Reached target timers.target. Sep 13 00:56:42.234281 systemd[1445]: Reached target basic.target. Sep 13 00:56:42.234508 systemd[1]: Started user@500.service. Sep 13 00:56:42.234799 systemd[1445]: Reached target default.target. Sep 13 00:56:42.234868 systemd[1445]: Startup finished in 119ms. Sep 13 00:56:42.243565 systemd[1]: Started session-1.scope. Sep 13 00:56:42.532058 systemd[1]: Started sshd@1-10.128.0.69:22-139.178.68.195:41628.service. Sep 13 00:56:42.612638 kernel: loop2: detected capacity change from 0 to 2097152 Sep 13 00:56:42.637359 systemd-nspawn[1456]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Sep 13 00:56:42.637783 systemd-nspawn[1456]: Press ^] three times within 1s to kill container. Sep 13 00:56:42.652662 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:56:42.735513 systemd[1]: Started oem-gce.service. Sep 13 00:56:42.743299 systemd[1]: Reached target multi-user.target. Sep 13 00:56:42.754221 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:56:42.767588 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:56:42.768023 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:56:42.779264 systemd[1]: Startup finished in 11.398s (kernel) + 17.602s (userspace) = 29.000s. Sep 13 00:56:42.797389 systemd-nspawn[1456]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 13 00:56:42.797389 systemd-nspawn[1456]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 13 00:56:42.797800 systemd-nspawn[1456]: + /usr/bin/google_instance_setup Sep 13 00:56:42.921233 sshd[1454]: Accepted publickey for core from 139.178.68.195 port 41628 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:56:42.923674 sshd[1454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:42.932112 systemd[1]: Started session-2.scope. Sep 13 00:56:42.933968 systemd-logind[1310]: New session 2 of user core. Sep 13 00:56:43.199451 sshd[1454]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:43.203955 systemd[1]: sshd@1-10.128.0.69:22-139.178.68.195:41628.service: Deactivated successfully. Sep 13 00:56:43.205223 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:56:43.207039 systemd-logind[1310]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:56:43.208919 systemd-logind[1310]: Removed session 2. Sep 13 00:56:43.258437 systemd[1]: Started sshd@2-10.128.0.69:22-139.178.68.195:41634.service. 
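The "Startup finished in 11.398s (kernel) + 17.602s (userspace) = 29.000s" line above comes from systemd's boot accounting; the same numbers, broken down per unit, can be pulled after boot with the following (illustrative):

    systemd-analyze                                     # overall kernel + userspace split, as logged above
    systemd-analyze blame                               # per-unit startup times, slowest first
    systemd-analyze critical-chain multi-user.target    # the dependency chain that gated boot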
Sep 13 00:56:43.439773 instance-setup[1464]: INFO Running google_set_multiqueue. Sep 13 00:56:43.456326 instance-setup[1464]: INFO Set channels for eth0 to 2. Sep 13 00:56:43.459916 instance-setup[1464]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 13 00:56:43.461326 instance-setup[1464]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 13 00:56:43.461887 instance-setup[1464]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 13 00:56:43.463099 instance-setup[1464]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 13 00:56:43.463512 instance-setup[1464]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 13 00:56:43.464889 instance-setup[1464]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 13 00:56:43.465343 instance-setup[1464]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Sep 13 00:56:43.466838 instance-setup[1464]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 13 00:56:43.478873 instance-setup[1464]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 13 00:56:43.479225 instance-setup[1464]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 13 00:56:43.528592 systemd-nspawn[1456]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 13 00:56:43.647763 sshd[1472]: Accepted publickey for core from 139.178.68.195 port 41634 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:56:43.649385 sshd[1472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:43.657115 systemd-logind[1310]: New session 3 of user core. Sep 13 00:56:43.657879 systemd[1]: Started session-3.scope. Sep 13 00:56:43.886315 startup-script[1502]: INFO Starting startup scripts. Sep 13 00:56:43.898763 startup-script[1502]: INFO No startup scripts found in metadata. Sep 13 00:56:43.898929 startup-script[1502]: INFO Finished running startup scripts. Sep 13 00:56:43.917379 sshd[1472]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:43.921885 systemd[1]: sshd@2-10.128.0.69:22-139.178.68.195:41634.service: Deactivated successfully. Sep 13 00:56:43.923097 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:56:43.925720 systemd-logind[1310]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:56:43.928931 systemd-logind[1310]: Removed session 3. Sep 13 00:56:43.941251 systemd-nspawn[1456]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 13 00:56:43.941251 systemd-nspawn[1456]: + daemon_pids=() Sep 13 00:56:43.942029 systemd-nspawn[1456]: + for d in accounts clock_skew network Sep 13 00:56:43.942029 systemd-nspawn[1456]: + daemon_pids+=($!) Sep 13 00:56:43.942029 systemd-nspawn[1456]: + for d in accounts clock_skew network Sep 13 00:56:43.942029 systemd-nspawn[1456]: + daemon_pids+=($!) Sep 13 00:56:43.942029 systemd-nspawn[1456]: + for d in accounts clock_skew network Sep 13 00:56:43.942291 systemd-nspawn[1456]: + daemon_pids+=($!) Sep 13 00:56:43.942291 systemd-nspawn[1456]: + NOTIFY_SOCKET=/run/systemd/notify Sep 13 00:56:43.942291 systemd-nspawn[1456]: + /usr/bin/systemd-notify --ready Sep 13 00:56:43.942767 systemd-nspawn[1456]: + /usr/bin/google_network_daemon Sep 13 00:56:43.943114 systemd-nspawn[1456]: + /usr/bin/google_clock_skew_daemon Sep 13 00:56:43.945686 systemd-nspawn[1456]: + /usr/bin/google_accounts_daemon Sep 13 00:56:43.975235 systemd[1]: Started sshd@3-10.128.0.69:22-139.178.68.195:41638.service. 
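google_set_multiqueue above spreads the two virtio-net queues across the two vCPUs: one IRQ pair pinned to CPU 0, the other to CPU 1, plus matching XPS masks on eth0's transmit queues. The writes it performs amount to the following (IRQ numbers and mask values taken from the log; run as root):

    # Pin the virtio1 interrupts: queue 0 IRQs to CPU 0, queue 1 IRQs to CPU 1.
    echo 0 > /proc/irq/31/smp_affinity_list
    echo 0 > /proc/irq/32/smp_affinity_list
    echo 1 > /proc/irq/33/smp_affinity_list
    echo 1 > /proc/irq/34/smp_affinity_list
    # XPS: tx-0 owned by CPU 0 (mask 0x1), tx-1 by CPU 1 (mask 0x2).
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus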
Sep 13 00:56:44.021416 systemd-nspawn[1456]: + wait -n 36 37 38 Sep 13 00:56:44.375836 sshd[1514]: Accepted publickey for core from 139.178.68.195 port 41638 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:56:44.376884 sshd[1514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:44.385159 systemd[1]: Started session-4.scope. Sep 13 00:56:44.386690 systemd-logind[1310]: New session 4 of user core. Sep 13 00:56:44.602511 google-clock-skew[1511]: INFO Starting Google Clock Skew daemon. Sep 13 00:56:44.627054 google-clock-skew[1511]: INFO Clock drift token has changed: 0. Sep 13 00:56:44.637482 systemd-nspawn[1456]: hwclock: Cannot access the Hardware Clock via any known method. Sep 13 00:56:44.637717 systemd-nspawn[1456]: hwclock: Use the --verbose option to see the details of our search for an access method. Sep 13 00:56:44.638386 google-clock-skew[1511]: WARNING Failed to sync system time with hardware clock. Sep 13 00:56:44.653262 google-networking[1512]: INFO Starting Google Networking daemon. Sep 13 00:56:44.658546 sshd[1514]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:44.662968 systemd[1]: sshd@3-10.128.0.69:22-139.178.68.195:41638.service: Deactivated successfully. Sep 13 00:56:44.664208 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:56:44.667044 systemd-logind[1310]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:56:44.668768 systemd-logind[1310]: Removed session 4. Sep 13 00:56:44.715191 systemd[1]: Started sshd@4-10.128.0.69:22-139.178.68.195:41648.service. Sep 13 00:56:44.787349 groupadd[1531]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 13 00:56:44.791389 groupadd[1531]: group added to /etc/gshadow: name=google-sudoers Sep 13 00:56:44.796315 groupadd[1531]: new group: name=google-sudoers, GID=1000 Sep 13 00:56:44.809734 google-accounts[1510]: INFO Starting Google Accounts daemon. Sep 13 00:56:44.835333 google-accounts[1510]: WARNING OS Login not installed. Sep 13 00:56:44.836440 google-accounts[1510]: INFO Creating a new user account for 0. Sep 13 00:56:44.845127 systemd-nspawn[1456]: useradd: invalid user name '0': use --badname to ignore Sep 13 00:56:44.846154 google-accounts[1510]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 13 00:56:45.096328 sshd[1529]: Accepted publickey for core from 139.178.68.195 port 41648 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:56:45.097505 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:45.103627 systemd-logind[1310]: New session 5 of user core. Sep 13 00:56:45.104348 systemd[1]: Started session-5.scope. Sep 13 00:56:45.334703 sudo[1543]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:56:45.335151 sudo[1543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:56:45.344857 dbus-daemon[1292]: \xd0=.\xd8$V: received setenforce notice (enforcing=1460124608) Sep 13 00:56:45.347095 sudo[1543]: pam_unix(sudo:session): session closed for user root Sep 13 00:56:45.405772 sshd[1529]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:45.410729 systemd[1]: sshd@4-10.128.0.69:22-139.178.68.195:41648.service: Deactivated successfully. Sep 13 00:56:45.412473 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:56:45.413065 systemd-logind[1310]: Session 5 logged out. 
Waiting for processes to exit. Sep 13 00:56:45.414532 systemd-logind[1310]: Removed session 5. Sep 13 00:56:45.458902 systemd[1]: Started sshd@5-10.128.0.69:22-139.178.68.195:41664.service. Sep 13 00:56:45.818505 sshd[1547]: Accepted publickey for core from 139.178.68.195 port 41664 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:56:45.820531 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:45.827287 systemd[1]: Started session-6.scope. Sep 13 00:56:45.827815 systemd-logind[1310]: New session 6 of user core. Sep 13 00:56:46.031835 sudo[1552]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:56:46.032274 sudo[1552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:56:46.036653 sudo[1552]: pam_unix(sudo:session): session closed for user root Sep 13 00:56:46.049125 sudo[1551]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:56:46.049546 sudo[1551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:56:46.062400 systemd[1]: Stopping audit-rules.service... Sep 13 00:56:46.064000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:56:46.070168 kernel: kauditd_printk_skb: 155 callbacks suppressed Sep 13 00:56:46.070266 kernel: audit: type=1305 audit(1757725006.064:138): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:56:46.070351 auditctl[1555]: No rules Sep 13 00:56:46.071404 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:56:46.071793 systemd[1]: Stopped audit-rules.service. Sep 13 00:56:46.075333 systemd[1]: Starting audit-rules.service... Sep 13 00:56:46.064000 audit[1555]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2db7d550 a2=420 a3=0 items=0 ppid=1 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:46.064000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:56:46.127527 kernel: audit: type=1300 audit(1757725006.064:138): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2db7d550 a2=420 a3=0 items=0 ppid=1 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:46.127649 kernel: audit: type=1327 audit(1757725006.064:138): proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:56:46.127688 kernel: audit: type=1131 audit(1757725006.071:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.128085 augenrules[1573]: No rules Sep 13 00:56:46.129831 systemd[1]: Finished audit-rules.service. 
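The audit-rules restart above boils down to flushing the loaded kernel rules and regenerating them from the rules.d fragments that the preceding sudo just deleted, which is why both auditctl and augenrules report "No rules". Roughly the same sequence by hand, as a sketch of what the unit does rather than a literal copy of it:

    auditctl -D           # delete every currently loaded audit rule
    auditctl -l           # confirm: prints "No rules"
    augenrules --load     # rebuild from /etc/audit/rules.d/*.rules and load the result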
Sep 13 00:56:46.141067 sudo[1551]: pam_unix(sudo:session): session closed for user root Sep 13 00:56:46.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.149680 kernel: audit: type=1130 audit(1757725006.127:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.137000 audit[1551]: USER_END pid=1551 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.195986 kernel: audit: type=1106 audit(1757725006.137:141): pid=1551 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.217383 kernel: audit: type=1104 audit(1757725006.137:142): pid=1551 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.137000 audit[1551]: CRED_DISP pid=1551 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.203951 systemd-logind[1310]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:56:46.199040 sshd[1547]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:46.206111 systemd[1]: sshd@5-10.128.0.69:22-139.178.68.195:41664.service: Deactivated successfully. Sep 13 00:56:46.207324 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:56:46.209211 systemd-logind[1310]: Removed session 6. 
Sep 13 00:56:46.194000 audit[1547]: USER_END pid=1547 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:56:46.251070 kernel: audit: type=1106 audit(1757725006.194:143): pid=1547 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:56:46.251204 kernel: audit: type=1104 audit(1757725006.194:144): pid=1547 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:56:46.194000 audit[1547]: CRED_DISP pid=1547 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:56:46.299500 kernel: audit: type=1131 audit(1757725006.204:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.69:22-139.178.68.195:41664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.69:22-139.178.68.195:41664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.281484 systemd[1]: Started sshd@6-10.128.0.69:22-139.178.68.195:41676.service. Sep 13 00:56:46.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.69:22-139.178.68.195:41676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.664000 audit[1580]: USER_ACCT pid=1580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:56:46.664932 sshd[1580]: Accepted publickey for core from 139.178.68.195 port 41676 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:56:46.665000 audit[1580]: CRED_ACQ pid=1580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:56:46.665000 audit[1580]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef2494240 a2=3 a3=0 items=0 ppid=1 pid=1580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:46.665000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:46.666864 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:46.673473 systemd[1]: Started session-7.scope. 
Sep 13 00:56:46.674013 systemd-logind[1310]: New session 7 of user core. Sep 13 00:56:46.683000 audit[1580]: USER_START pid=1580 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:56:46.686000 audit[1583]: CRED_ACQ pid=1583 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:56:46.885000 audit[1584]: USER_ACCT pid=1584 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.887244 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:56:46.885000 audit[1584]: CRED_REFR pid=1584 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.887750 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:56:46.889000 audit[1584]: USER_START pid=1584 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:56:46.921176 systemd[1]: Starting docker.service... Sep 13 00:56:46.970010 env[1594]: time="2025-09-13T00:56:46.969958625Z" level=info msg="Starting up" Sep 13 00:56:46.972119 env[1594]: time="2025-09-13T00:56:46.972082094Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:56:46.972119 env[1594]: time="2025-09-13T00:56:46.972114361Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:56:46.972302 env[1594]: time="2025-09-13T00:56:46.972148301Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:56:46.972302 env[1594]: time="2025-09-13T00:56:46.972184684Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:56:46.974347 env[1594]: time="2025-09-13T00:56:46.974295691Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:56:46.974347 env[1594]: time="2025-09-13T00:56:46.974320862Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:56:46.974347 env[1594]: time="2025-09-13T00:56:46.974344792Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:56:46.974560 env[1594]: time="2025-09-13T00:56:46.974360814Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:56:46.985676 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3801335950-merged.mount: Deactivated successfully. Sep 13 00:56:47.365487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:56:47.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:56:47.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:47.365861 systemd[1]: Stopped kubelet.service. Sep 13 00:56:47.368404 systemd[1]: Starting kubelet.service... Sep 13 00:56:47.686382 env[1594]: time="2025-09-13T00:56:47.686236033Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 13 00:56:47.686749 env[1594]: time="2025-09-13T00:56:47.686717886Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 13 00:56:47.687349 env[1594]: time="2025-09-13T00:56:47.687298673Z" level=info msg="Loading containers: start." Sep 13 00:56:47.788000 audit[1627]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.788000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc1bce77d0 a2=0 a3=7ffc1bce77bc items=0 ppid=1594 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.788000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 13 00:56:47.790000 audit[1629]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.790000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc2756d180 a2=0 a3=7ffc2756d16c items=0 ppid=1594 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.790000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 13 00:56:47.793000 audit[1631]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.793000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0d76fb90 a2=0 a3=7ffc0d76fb7c items=0 ppid=1594 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.793000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:56:47.796000 audit[1633]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.796000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff3160bb60 a2=0 a3=7fff3160bb4c items=0 ppid=1594 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.796000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:56:47.800000 audit[1635]: NETFILTER_CFG 
table=filter:6 family=2 entries=1 op=nft_register_rule pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.800000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd65ff2c70 a2=0 a3=7ffd65ff2c5c items=0 ppid=1594 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.800000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 13 00:56:47.822000 audit[1640]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.822000 audit[1640]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdec7f17f0 a2=0 a3=7ffdec7f17dc items=0 ppid=1594 pid=1640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.822000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 13 00:56:47.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:47.915318 systemd[1]: Started kubelet.service. Sep 13 00:56:47.919000 audit[1649]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.919000 audit[1649]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd10c6af10 a2=0 a3=7ffd10c6aefc items=0 ppid=1594 pid=1649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.919000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 13 00:56:47.924000 audit[1651]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.924000 audit[1651]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe45783a10 a2=0 a3=7ffe457839fc items=0 ppid=1594 pid=1651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.924000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 13 00:56:47.931000 audit[1653]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.931000 audit[1653]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffde4faea30 a2=0 a3=7ffde4faea1c items=0 ppid=1594 pid=1653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.931000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:56:47.944000 audit[1662]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.944000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc06322ce0 a2=0 a3=7ffc06322ccc items=0 ppid=1594 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.944000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:56:47.949000 audit[1663]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:47.949000 audit[1663]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc9981cfd0 a2=0 a3=7ffc9981cfbc items=0 ppid=1594 pid=1663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:47.949000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:56:47.970633 kernel: Initializing XFRM netlink socket Sep 13 00:56:48.012761 kubelet[1648]: E0913 00:56:48.012701 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:56:48.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:56:48.018746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:56:48.019048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:56:48.030651 env[1594]: time="2025-09-13T00:56:48.030588960Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 13 00:56:48.063000 audit[1672]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.063000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff831c57d0 a2=0 a3=7fff831c57bc items=0 ppid=1594 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.063000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 13 00:56:48.077000 audit[1675]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.077000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffee25fdca0 a2=0 a3=7ffee25fdc8c items=0 ppid=1594 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.077000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 13 00:56:48.082000 audit[1678]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.082000 audit[1678]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff6151c8a0 a2=0 a3=7fff6151c88c items=0 ppid=1594 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.082000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 13 00:56:48.085000 audit[1680]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.085000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffef9d14210 a2=0 a3=7ffef9d141fc items=0 ppid=1594 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.085000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 13 00:56:48.088000 audit[1682]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.088000 audit[1682]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffdba4da600 a2=0 a3=7ffdba4da5ec items=0 ppid=1594 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.088000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 13 00:56:48.092000 audit[1684]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.092000 audit[1684]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdc190eb30 a2=0 a3=7ffdc190eb1c items=0 ppid=1594 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.092000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 13 00:56:48.095000 audit[1686]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.095000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe0f00f0c0 a2=0 a3=7ffe0f00f0ac items=0 ppid=1594 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.095000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 13 00:56:48.107000 audit[1689]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.107000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe47baa8d0 a2=0 a3=7ffe47baa8bc items=0 ppid=1594 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.107000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 13 00:56:48.111000 audit[1691]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.111000 audit[1691]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd502c3b40 a2=0 a3=7ffd502c3b2c items=0 ppid=1594 pid=1691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.111000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:56:48.114000 audit[1693]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.114000 audit[1693]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffe619be60 a2=0 a3=7fffe619be4c items=0 ppid=1594 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.114000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:56:48.117000 audit[1695]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.117000 audit[1695]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdca2bef40 a2=0 a3=7ffdca2bef2c items=0 ppid=1594 pid=1695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.117000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 13 00:56:48.119158 systemd-networkd[1071]: docker0: Link UP Sep 13 00:56:48.129000 audit[1699]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.129000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcccf22880 a2=0 a3=7ffcccf2286c items=0 ppid=1594 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.129000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:56:48.135000 audit[1700]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:56:48.135000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc02de3f70 a2=0 a3=7ffc02de3f5c items=0 ppid=1594 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:48.135000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:56:48.137804 env[1594]: time="2025-09-13T00:56:48.137754640Z" level=info msg="Loading containers: done." Sep 13 00:56:48.158013 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1036481092-merged.mount: Deactivated successfully. Sep 13 00:56:48.164197 env[1594]: time="2025-09-13T00:56:48.164126595Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:56:48.164459 env[1594]: time="2025-09-13T00:56:48.164410817Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:56:48.164596 env[1594]: time="2025-09-13T00:56:48.164559258Z" level=info msg="Daemon has completed initialization" Sep 13 00:56:48.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:48.186217 systemd[1]: Started docker.service. Sep 13 00:56:48.201055 env[1594]: time="2025-09-13T00:56:48.198451227Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:56:49.171931 env[1325]: time="2025-09-13T00:56:49.171868408Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:56:49.730504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424410820.mount: Deactivated successfully. Sep 13 00:56:51.366253 env[1325]: time="2025-09-13T00:56:51.366181939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:51.369001 env[1325]: time="2025-09-13T00:56:51.368955784Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:51.371136 env[1325]: time="2025-09-13T00:56:51.371091171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:51.373424 env[1325]: time="2025-09-13T00:56:51.373381263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:51.374645 env[1325]: time="2025-09-13T00:56:51.374585125Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:56:51.375589 env[1325]: time="2025-09-13T00:56:51.375557120Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:56:52.868576 env[1325]: time="2025-09-13T00:56:52.868504359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:52.871324 env[1325]: time="2025-09-13T00:56:52.871272076Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:52.873788 env[1325]: time="2025-09-13T00:56:52.873743194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:52.876060 env[1325]: time="2025-09-13T00:56:52.876013060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:52.877164 env[1325]: time="2025-09-13T00:56:52.877111804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:56:52.878599 env[1325]: time="2025-09-13T00:56:52.878560753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:56:54.268878 env[1325]: time="2025-09-13T00:56:54.268800457Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:54.272088 env[1325]: time="2025-09-13T00:56:54.272033975Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:54.275070 env[1325]: time="2025-09-13T00:56:54.275004880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:54.278862 env[1325]: time="2025-09-13T00:56:54.278796469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:54.279635 env[1325]: time="2025-09-13T00:56:54.279575489Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:56:54.280666 env[1325]: time="2025-09-13T00:56:54.280601208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:56:55.321857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922060201.mount: Deactivated successfully. Sep 13 00:56:56.092078 env[1325]: time="2025-09-13T00:56:56.092000547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:56.094684 env[1325]: time="2025-09-13T00:56:56.094635179Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:56.096723 env[1325]: time="2025-09-13T00:56:56.096676422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:56.098592 env[1325]: time="2025-09-13T00:56:56.098554039Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:56.099209 env[1325]: time="2025-09-13T00:56:56.099155659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:56:56.100009 env[1325]: time="2025-09-13T00:56:56.099975798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:56:56.508286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148757048.mount: Deactivated successfully. 
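Note: the NETFILTER_CFG / SYSCALL / PROCTITLE triples earlier in this window are the kernel audit trail of dockerd programming its bridge rules through /usr/sbin/xtables-nft-multi. The PROCTITLE payload is just the command line, hex-encoded with NUL-separated arguments, so it can be decoded offline. A minimal sketch (Python; the hex string is copied verbatim from the POSTROUTING/MASQUERADE record above):

```python
# Decode an audit PROCTITLE field: hex-encoded argv, arguments separated by NUL bytes.
PROCTITLE = (
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174"
    "002D4900504F5354524F5554494E47002D73003137322E31372E302E302F3136"
    "0000002D6F00646F636B657230002D6A004D415351554552414445"
)

def decode_proctitle(hex_str: str) -> str:
    args = bytes.fromhex(hex_str).split(b"\x00")
    # Drop empty argv entries (iptables is sometimes invoked with empty string arguments).
    return " ".join(a.decode() for a in args if a)

print(decode_proctitle(PROCTITLE))
# -> /usr/sbin/iptables --wait -t nat -I POSTROUTING -s 172.17.0.0/16 -o docker0 -j MASQUERADE
```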
Sep 13 00:56:57.711635 env[1325]: time="2025-09-13T00:56:57.711550506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:57.714424 env[1325]: time="2025-09-13T00:56:57.714370727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:57.717030 env[1325]: time="2025-09-13T00:56:57.716979044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:57.719289 env[1325]: time="2025-09-13T00:56:57.719245124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:57.720440 env[1325]: time="2025-09-13T00:56:57.720386077Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:56:57.721218 env[1325]: time="2025-09-13T00:56:57.721184731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:56:58.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:58.086437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:56:58.086717 systemd[1]: Stopped kubelet.service. Sep 13 00:56:58.092180 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 13 00:56:58.092351 kernel: audit: type=1130 audit(1757725018.086:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:58.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:58.117744 systemd[1]: Starting kubelet.service... Sep 13 00:56:58.122081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431262909.mount: Deactivated successfully. Sep 13 00:56:58.135273 kernel: audit: type=1131 audit(1757725018.097:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:58.241841 env[1325]: time="2025-09-13T00:56:58.241752994Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:58.247458 env[1325]: time="2025-09-13T00:56:58.247257988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:58.374179 env[1325]: time="2025-09-13T00:56:58.374122715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:58.379805 env[1325]: time="2025-09-13T00:56:58.378853476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:58.379805 env[1325]: time="2025-09-13T00:56:58.379474850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:56:58.384972 env[1325]: time="2025-09-13T00:56:58.384921580Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:56:58.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:58.411080 systemd[1]: Started kubelet.service. Sep 13 00:56:58.433665 kernel: audit: type=1130 audit(1757725018.410:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:58.478696 kubelet[1741]: E0913 00:56:58.478646 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:56:58.481766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:56:58.482070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:56:58.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:56:58.504649 kernel: audit: type=1131 audit(1757725018.481:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:56:58.800768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938429276.mount: Deactivated successfully. 
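Note: the `audit(1757725018.086:184)` stamps printed by kauditd are Unix epoch seconds plus a per-boot event serial; converting the epoch part reproduces the wall-clock time shown in the journal prefix of the same line. A quick conversion sketch (Python; the value is copied from the type=1130 record above, and the printed microseconds may wobble slightly due to float rounding):

```python
from datetime import datetime, timezone

# audit(<epoch>.<millis>:<serial>) -- epoch seconds and an event serial number.
stamp = "1757725018.086:184"
epoch, serial = stamp.split(":")

ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(ts.isoformat(), "serial", serial)
# ~ 2025-09-13T00:56:58.086+00:00 serial 184  (matches the "Sep 13 00:56:58" journal prefix)
```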
Sep 13 00:57:01.365017 env[1325]: time="2025-09-13T00:57:01.364944223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:01.367947 env[1325]: time="2025-09-13T00:57:01.367896075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:01.370436 env[1325]: time="2025-09-13T00:57:01.370391492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:01.372870 env[1325]: time="2025-09-13T00:57:01.372824893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:01.373978 env[1325]: time="2025-09-13T00:57:01.373927811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:57:04.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.242268 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 00:57:04.265642 kernel: audit: type=1131 audit(1757725024.242:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.301854 systemd[1]: Stopped kubelet.service. Sep 13 00:57:04.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.305896 systemd[1]: Starting kubelet.service... Sep 13 00:57:04.324646 kernel: audit: type=1130 audit(1757725024.301:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.358487 kernel: audit: type=1131 audit(1757725024.301:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.375162 systemd[1]: Reloading. 
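Note: between 00:56:49 and 00:57:01 containerd pulls the v1.31 control-plane image set (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd). Each pull ends with a `PullImage ... returns image reference "sha256:..."` message, so the name-to-digest mapping can be scraped straight from the journal text. A small sketch, assuming this log has been dumped to a plain-text file (the file name `journal.txt` is an assumption):

```python
import re

# Matches: PullImage \"<image>\" returns image reference \"sha256:<id>\"
PULL_RE = re.compile(
    r'PullImage \\"(?P<image>[^"\\]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]+)\\"'
)

with open("journal.txt", encoding="utf-8") as fh:  # hypothetical dump of this journal
    for line in fh:
        m = PULL_RE.search(line)
        if m:
            print(f'{m.group("image"):45s} {m.group("ref")}')
# e.g. registry.k8s.io/kube-apiserver:v1.31.13       sha256:368da3301bb0...
```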
Sep 13 00:57:04.524815 /usr/lib/systemd/system-generators/torcx-generator[1797]: time="2025-09-13T00:57:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:57:04.529027 /usr/lib/systemd/system-generators/torcx-generator[1797]: time="2025-09-13T00:57:04Z" level=info msg="torcx already run" Sep 13 00:57:04.662884 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:57:04.662913 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:57:04.686993 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:57:04.814818 systemd[1]: Started kubelet.service. Sep 13 00:57:04.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.838462 kernel: audit: type=1130 audit(1757725024.814:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.838422 systemd[1]: Stopping kubelet.service... Sep 13 00:57:04.840585 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:57:04.841033 systemd[1]: Stopped kubelet.service. Sep 13 00:57:04.845058 systemd[1]: Starting kubelet.service... Sep 13 00:57:04.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:04.868647 kernel: audit: type=1131 audit(1757725024.840:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:05.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:05.177161 systemd[1]: Started kubelet.service. Sep 13 00:57:05.202638 kernel: audit: type=1130 audit(1757725025.177:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:05.255772 kubelet[1867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:57:05.255772 kubelet[1867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
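Note: the reload at 00:57:04 flags two legacy cgroup-v1 directives in locksmithd.service (CPUShares=, MemoryLimit=) along with the replacements systemd suggests (CPUWeight=, MemoryMax=). A throwaway sketch for spotting such directives in a unit file before systemd complains; the replacement table only contains the pairs named in the warnings above:

```python
from pathlib import Path

# Directive replacements taken from the systemd warnings in this log; extend as needed.
LEGACY = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

def scan_unit(path: str) -> None:
    for lineno, line in enumerate(Path(path).read_text().splitlines(), start=1):
        for old, new in LEGACY.items():
            if line.lstrip().startswith(old):
                print(f"{path}:{lineno}: uses {old} -- consider {new}")

scan_unit("/usr/lib/systemd/system/locksmithd.service")  # path taken from the log
```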
Sep 13 00:57:05.255772 kubelet[1867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:57:05.256434 kubelet[1867]: I0913 00:57:05.255886 1867 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:57:05.706801 kubelet[1867]: I0913 00:57:05.706742 1867 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:57:05.706801 kubelet[1867]: I0913 00:57:05.706783 1867 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:57:05.707211 kubelet[1867]: I0913 00:57:05.707185 1867 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:57:05.778527 kubelet[1867]: E0913 00:57:05.778468 1867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:05.779866 kubelet[1867]: I0913 00:57:05.779830 1867 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:57:05.792108 kubelet[1867]: E0913 00:57:05.792031 1867 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:57:05.792356 kubelet[1867]: I0913 00:57:05.792316 1867 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:57:05.798658 kubelet[1867]: I0913 00:57:05.798619 1867 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:57:05.799117 kubelet[1867]: I0913 00:57:05.799093 1867 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:57:05.799361 kubelet[1867]: I0913 00:57:05.799313 1867 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:57:05.799633 kubelet[1867]: I0913 00:57:05.799349 1867 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:57:05.799852 kubelet[1867]: I0913 00:57:05.799656 1867 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:57:05.799852 kubelet[1867]: I0913 00:57:05.799674 1867 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:57:05.799852 kubelet[1867]: I0913 00:57:05.799837 1867 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:57:05.809525 kubelet[1867]: I0913 00:57:05.809438 1867 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:57:05.809525 kubelet[1867]: I0913 00:57:05.809526 1867 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:57:05.809783 kubelet[1867]: I0913 00:57:05.809584 1867 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:57:05.809783 kubelet[1867]: I0913 00:57:05.809631 1867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:57:05.811578 kubelet[1867]: W0913 00:57:05.811082 1867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4&limit=500&resourceVersion=0": dial tcp 10.128.0.69:6443: connect: connection refused Sep 13 00:57:05.811578 kubelet[1867]: E0913 00:57:05.811206 1867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4&limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:05.815888 kubelet[1867]: W0913 00:57:05.815814 1867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.69:6443: connect: connection refused Sep 13 00:57:05.816135 kubelet[1867]: E0913 00:57:05.816079 1867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:05.816248 kubelet[1867]: I0913 00:57:05.816224 1867 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:57:05.817467 kubelet[1867]: I0913 00:57:05.816945 1867 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:57:05.817467 kubelet[1867]: W0913 00:57:05.817038 1867 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:57:05.827329 kubelet[1867]: I0913 00:57:05.827278 1867 server.go:1274] "Started kubelet" Sep 13 00:57:05.828000 audit[1867]: AVC avc: denied { mac_admin } for pid=1867 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:05.850974 kubelet[1867]: I0913 00:57:05.829483 1867 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:57:05.850974 kubelet[1867]: I0913 00:57:05.829537 1867 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:57:05.850974 kubelet[1867]: I0913 00:57:05.829654 1867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:57:05.850974 kubelet[1867]: I0913 00:57:05.838415 1867 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:57:05.850974 kubelet[1867]: I0913 00:57:05.839819 1867 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:57:05.850974 kubelet[1867]: E0913 00:57:05.846095 1867 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:57:05.850974 kubelet[1867]: I0913 00:57:05.846272 1867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:57:05.850974 kubelet[1867]: I0913 00:57:05.849047 1867 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:57:05.850974 kubelet[1867]: E0913 00:57:05.849456 1867 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" not found" Sep 13 00:57:05.852399 kernel: audit: type=1400 audit(1757725025.828:194): avc: denied { mac_admin } for pid=1867 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:05.853059 kubelet[1867]: I0913 00:57:05.853024 1867 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:57:05.828000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:05.864742 kernel: audit: type=1401 audit(1757725025.828:194): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:05.864803 kubelet[1867]: I0913 00:57:05.854010 1867 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:57:05.864803 kubelet[1867]: E0913 00:57:05.855491 1867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4?timeout=10s\": dial tcp 10.128.0.69:6443: connect: connection refused" interval="200ms" Sep 13 00:57:05.864803 kubelet[1867]: I0913 00:57:05.855582 1867 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:57:05.864803 kubelet[1867]: I0913 00:57:05.855953 1867 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:57:05.864803 kubelet[1867]: I0913 00:57:05.856070 1867 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:57:05.864803 kubelet[1867]: I0913 00:57:05.859990 1867 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:57:05.865360 kubelet[1867]: W0913 00:57:05.865297 1867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.69:6443: connect: connection refused Sep 13 00:57:05.865517 kubelet[1867]: E0913 00:57:05.865487 1867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:05.828000 audit[1867]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008ae9f0 a1=c00097f9f8 a2=c0008ae9c0 a3=25 items=0 ppid=1 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:57:05.880951 kubelet[1867]: I0913 00:57:05.870627 1867 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:57:05.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:05.926525 kernel: audit: type=1300 audit(1757725025.828:194): arch=c000003e syscall=188 success=no exit=-22 a0=c0008ae9f0 a1=c00097f9f8 a2=c0008ae9c0 a3=25 items=0 ppid=1 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.926695 kernel: audit: type=1327 audit(1757725025.828:194): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:05.828000 audit[1867]: AVC avc: denied { mac_admin } for pid=1867 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:05.828000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:05.828000 audit[1867]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000939a00 a1=c00097fa10 a2=c0008aea80 a3=25 items=0 ppid=1 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:05.833000 audit[1878]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:05.833000 audit[1878]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd097695f0 a2=0 a3=7ffd097695dc items=0 ppid=1867 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:57:05.835000 audit[1879]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:05.835000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbc8b5990 a2=0 a3=7ffdbc8b597c items=0 ppid=1867 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:57:05.852000 audit[1881]: 
NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:05.852000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdd1a98d00 a2=0 a3=7ffdd1a98cec items=0 ppid=1867 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:57:05.928160 kubelet[1867]: E0913 00:57:05.923901 1867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4.1864b199213f2f9c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,UID:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,},FirstTimestamp:2025-09-13 00:57:05.827237788 +0000 UTC m=+0.637573434,LastTimestamp:2025-09-13 00:57:05.827237788 +0000 UTC m=+0.637573434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,}" Sep 13 00:57:05.930000 audit[1883]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:05.930000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffd11bbe10 a2=0 a3=7fffd11bbdfc items=0 ppid=1867 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:57:05.948000 audit[1888]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:05.948000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcbe5fc910 a2=0 a3=7ffcbe5fc8fc items=0 ppid=1867 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 13 00:57:05.949351 kubelet[1867]: I0913 00:57:05.949293 1867 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:57:05.949587 kubelet[1867]: E0913 00:57:05.949561 1867 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" not found" Sep 13 00:57:05.950000 audit[1891]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:05.950000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeb97f58f0 a2=0 a3=7ffeb97f58dc items=0 ppid=1867 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:57:05.951453 kubelet[1867]: I0913 00:57:05.951315 1867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:57:05.951453 kubelet[1867]: I0913 00:57:05.951349 1867 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:57:05.951453 kubelet[1867]: I0913 00:57:05.951377 1867 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:57:05.951639 kubelet[1867]: E0913 00:57:05.951444 1867 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:57:05.953000 audit[1892]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:05.953000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6978b430 a2=0 a3=7fff6978b41c items=0 ppid=1867 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:57:05.953000 audit[1893]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:05.953000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffbe4cd4a0 a2=0 a3=7fffbe4cd48c items=0 ppid=1867 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:57:05.954936 kubelet[1867]: W0913 00:57:05.954898 1867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.69:6443: connect: connection refused Sep 13 00:57:05.955015 kubelet[1867]: E0913 00:57:05.954955 1867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:05.959023 kubelet[1867]: I0913 00:57:05.958907 1867 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:57:05.959023 kubelet[1867]: I0913 00:57:05.958930 1867 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:57:05.959023 kubelet[1867]: I0913 00:57:05.958964 1867 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:57:05.962000 audit[1896]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:05.962000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3f17f0d0 a2=0 a3=7ffd3f17f0bc items=0 ppid=1867 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:57:05.966478 kubelet[1867]: I0913 00:57:05.966437 1867 policy_none.go:49] "None policy: Start" Sep 13 00:57:05.966000 audit[1897]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:05.966000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffcdf2e87a0 a2=0 a3=7ffcdf2e878c items=0 ppid=1867 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:57:05.967474 kubelet[1867]: I0913 00:57:05.967448 1867 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:57:05.967581 kubelet[1867]: I0913 00:57:05.967487 1867 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:57:05.967000 audit[1898]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:05.967000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc93339a40 a2=0 a3=7ffc93339a2c items=0 ppid=1867 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:57:05.969000 audit[1899]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:05.969000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe2fda2c00 a2=0 a3=7ffe2fda2bec items=0 ppid=1867 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:57:05.981157 kubelet[1867]: 
I0913 00:57:05.981079 1867 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:57:05.980000 audit[1867]: AVC avc: denied { mac_admin } for pid=1867 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:05.980000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:05.980000 audit[1867]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009c7aa0 a1=c000a4aab0 a2=c0009c7a70 a3=25 items=0 ppid=1 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:05.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:05.981677 kubelet[1867]: I0913 00:57:05.981183 1867 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:57:05.981677 kubelet[1867]: I0913 00:57:05.981382 1867 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:57:05.981677 kubelet[1867]: I0913 00:57:05.981399 1867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:57:05.982777 kubelet[1867]: I0913 00:57:05.982756 1867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:57:05.984906 kubelet[1867]: E0913 00:57:05.984869 1867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" not found" Sep 13 00:57:06.056399 kubelet[1867]: E0913 00:57:06.056341 1867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4?timeout=10s\": dial tcp 10.128.0.69:6443: connect: connection refused" interval="400ms" Sep 13 00:57:06.087913 kubelet[1867]: I0913 00:57:06.087856 1867 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.088379 kubelet[1867]: E0913 00:57:06.088331 1867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.69:6443/api/v1/nodes\": dial tcp 10.128.0.69:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157062 kubelet[1867]: I0913 00:57:06.156978 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a760936a5deb629b583abadb120aa137-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"a760936a5deb629b583abadb120aa137\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157062 kubelet[1867]: I0913 00:57:06.157055 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157354 kubelet[1867]: I0913 00:57:06.157088 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157354 kubelet[1867]: I0913 00:57:06.157122 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157354 kubelet[1867]: I0913 00:57:06.157150 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157354 kubelet[1867]: I0913 00:57:06.157178 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/044d5bf1fc34ac3a54c79ca8f8ae2a98-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"044d5bf1fc34ac3a54c79ca8f8ae2a98\") " pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157566 kubelet[1867]: I0913 00:57:06.157203 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a760936a5deb629b583abadb120aa137-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"a760936a5deb629b583abadb120aa137\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157566 kubelet[1867]: I0913 00:57:06.157238 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.157566 kubelet[1867]: I0913 00:57:06.157268 1867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a760936a5deb629b583abadb120aa137-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"a760936a5deb629b583abadb120aa137\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.296170 kubelet[1867]: I0913 00:57:06.296026 1867 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.298189 kubelet[1867]: E0913 00:57:06.298137 1867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.69:6443/api/v1/nodes\": dial tcp 10.128.0.69:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.369506 env[1325]: time="2025-09-13T00:57:06.369448972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,Uid:044d5bf1fc34ac3a54c79ca8f8ae2a98,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:06.376881 env[1325]: time="2025-09-13T00:57:06.376825809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,Uid:a760936a5deb629b583abadb120aa137,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:06.381742 env[1325]: time="2025-09-13T00:57:06.381686488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,Uid:144c3e8cad4e41a0bd106a473978c8fb,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:06.456927 kubelet[1867]: E0913 00:57:06.456863 1867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4?timeout=10s\": dial tcp 10.128.0.69:6443: connect: connection refused" interval="800ms" Sep 13 00:57:06.704028 kubelet[1867]: I0913 00:57:06.703529 1867 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.704028 kubelet[1867]: E0913 00:57:06.703981 1867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.69:6443/api/v1/nodes\": dial tcp 10.128.0.69:6443: connect: connection refused" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:06.784677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394648388.mount: Deactivated successfully. 
Sep 13 00:57:06.791448 kubelet[1867]: W0913 00:57:06.791326 1867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4&limit=500&resourceVersion=0": dial tcp 10.128.0.69:6443: connect: connection refused Sep 13 00:57:06.791656 kubelet[1867]: E0913 00:57:06.791429 1867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4&limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:06.792105 env[1325]: time="2025-09-13T00:57:06.792037149Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.796322 env[1325]: time="2025-09-13T00:57:06.796260850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.797498 env[1325]: time="2025-09-13T00:57:06.797448106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.799142 env[1325]: time="2025-09-13T00:57:06.799073092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.801988 env[1325]: time="2025-09-13T00:57:06.801941808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.803175 env[1325]: time="2025-09-13T00:57:06.803108559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.805890 env[1325]: time="2025-09-13T00:57:06.805834159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.807256 env[1325]: time="2025-09-13T00:57:06.807209780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.808330 env[1325]: time="2025-09-13T00:57:06.808285065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.809310 env[1325]: time="2025-09-13T00:57:06.809272525Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.814364 env[1325]: time="2025-09-13T00:57:06.814291055Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.824192 kubelet[1867]: W0913 00:57:06.824135 1867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.69:6443: connect: connection refused Sep 13 00:57:06.824384 kubelet[1867]: E0913 00:57:06.824195 1867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:06.827501 env[1325]: time="2025-09-13T00:57:06.827447759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:06.856865 env[1325]: time="2025-09-13T00:57:06.841872576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:06.856865 env[1325]: time="2025-09-13T00:57:06.841940266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:06.856865 env[1325]: time="2025-09-13T00:57:06.841963503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:06.856865 env[1325]: time="2025-09-13T00:57:06.842494289Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e7ed6c6a3fcba78965badbf08be6332facda8256437138b234dec9cc969b99d pid=1908 runtime=io.containerd.runc.v2 Sep 13 00:57:06.879854 env[1325]: time="2025-09-13T00:57:06.879756872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:06.880055 env[1325]: time="2025-09-13T00:57:06.879870223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:06.880055 env[1325]: time="2025-09-13T00:57:06.879911505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:06.880210 env[1325]: time="2025-09-13T00:57:06.880165770Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95fe18f4e0167b06c3ebc8adac3cbad227a7e0158fb1fd9fa2b1a4275ad37410 pid=1929 runtime=io.containerd.runc.v2 Sep 13 00:57:06.908464 env[1325]: time="2025-09-13T00:57:06.908348314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:06.908842 env[1325]: time="2025-09-13T00:57:06.908521893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:06.908842 env[1325]: time="2025-09-13T00:57:06.908627308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:06.909097 env[1325]: time="2025-09-13T00:57:06.909008802Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a41f8af0d28dbf2594e867841724fdd8d46febe8b04687e6e71e528668c7dab pid=1952 runtime=io.containerd.runc.v2 Sep 13 00:57:07.047193 env[1325]: time="2025-09-13T00:57:07.045753198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,Uid:044d5bf1fc34ac3a54c79ca8f8ae2a98,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e7ed6c6a3fcba78965badbf08be6332facda8256437138b234dec9cc969b99d\"" Sep 13 00:57:07.049824 kubelet[1867]: E0913 00:57:07.049765 1867 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806" Sep 13 00:57:07.052478 env[1325]: time="2025-09-13T00:57:07.052389663Z" level=info msg="CreateContainer within sandbox \"6e7ed6c6a3fcba78965badbf08be6332facda8256437138b234dec9cc969b99d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:57:07.054986 kubelet[1867]: W0913 00:57:07.054864 1867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.69:6443: connect: connection refused Sep 13 00:57:07.055160 kubelet[1867]: E0913 00:57:07.055029 1867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:07.068226 env[1325]: time="2025-09-13T00:57:07.068145639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,Uid:a760936a5deb629b583abadb120aa137,Namespace:kube-system,Attempt:0,} returns sandbox id \"95fe18f4e0167b06c3ebc8adac3cbad227a7e0158fb1fd9fa2b1a4275ad37410\"" Sep 13 00:57:07.070400 kubelet[1867]: E0913 00:57:07.070353 1867 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806" Sep 13 00:57:07.074388 env[1325]: time="2025-09-13T00:57:07.074332342Z" level=info msg="CreateContainer within sandbox \"95fe18f4e0167b06c3ebc8adac3cbad227a7e0158fb1fd9fa2b1a4275ad37410\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:57:07.076367 env[1325]: time="2025-09-13T00:57:07.076311333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,Uid:144c3e8cad4e41a0bd106a473978c8fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a41f8af0d28dbf2594e867841724fdd8d46febe8b04687e6e71e528668c7dab\"" Sep 13 00:57:07.084777 kubelet[1867]: E0913 00:57:07.084725 1867 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" hostnameMaxLen=63 
truncatedHostname="kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db97" Sep 13 00:57:07.087720 env[1325]: time="2025-09-13T00:57:07.087663517Z" level=info msg="CreateContainer within sandbox \"7a41f8af0d28dbf2594e867841724fdd8d46febe8b04687e6e71e528668c7dab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:57:07.088897 env[1325]: time="2025-09-13T00:57:07.088832115Z" level=info msg="CreateContainer within sandbox \"6e7ed6c6a3fcba78965badbf08be6332facda8256437138b234dec9cc969b99d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2bffd2dab340b85e9977558ff9212420fb076da1639192805604b80f7a377485\"" Sep 13 00:57:07.089830 env[1325]: time="2025-09-13T00:57:07.089787617Z" level=info msg="StartContainer for \"2bffd2dab340b85e9977558ff9212420fb076da1639192805604b80f7a377485\"" Sep 13 00:57:07.106637 env[1325]: time="2025-09-13T00:57:07.106544742Z" level=info msg="CreateContainer within sandbox \"95fe18f4e0167b06c3ebc8adac3cbad227a7e0158fb1fd9fa2b1a4275ad37410\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e334f519853049ac33cb98525d2d2e977ecb5b19bf46a1c85ba7336ee4965f8\"" Sep 13 00:57:07.107443 env[1325]: time="2025-09-13T00:57:07.107381741Z" level=info msg="StartContainer for \"7e334f519853049ac33cb98525d2d2e977ecb5b19bf46a1c85ba7336ee4965f8\"" Sep 13 00:57:07.112664 env[1325]: time="2025-09-13T00:57:07.112583881Z" level=info msg="CreateContainer within sandbox \"7a41f8af0d28dbf2594e867841724fdd8d46febe8b04687e6e71e528668c7dab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8983b6e4363a481824744042421cbb807e422fcd64991f06ee1abe508a6d59fb\"" Sep 13 00:57:07.113818 env[1325]: time="2025-09-13T00:57:07.113782537Z" level=info msg="StartContainer for \"8983b6e4363a481824744042421cbb807e422fcd64991f06ee1abe508a6d59fb\"" Sep 13 00:57:07.260396 kubelet[1867]: E0913 00:57:07.257571 1867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4?timeout=10s\": dial tcp 10.128.0.69:6443: connect: connection refused" interval="1.6s" Sep 13 00:57:07.282440 env[1325]: time="2025-09-13T00:57:07.282345925Z" level=info msg="StartContainer for \"2bffd2dab340b85e9977558ff9212420fb076da1639192805604b80f7a377485\" returns successfully" Sep 13 00:57:07.321508 kubelet[1867]: W0913 00:57:07.320433 1867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.69:6443: connect: connection refused Sep 13 00:57:07.322246 kubelet[1867]: E0913 00:57:07.322192 1867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.69:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:57:07.327102 env[1325]: time="2025-09-13T00:57:07.326928477Z" level=info msg="StartContainer for \"8983b6e4363a481824744042421cbb807e422fcd64991f06ee1abe508a6d59fb\" returns successfully" Sep 13 00:57:07.364167 env[1325]: time="2025-09-13T00:57:07.364024251Z" level=info msg="StartContainer for \"7e334f519853049ac33cb98525d2d2e977ecb5b19bf46a1c85ba7336ee4965f8\" returns successfully" Sep 13 
00:57:07.510366 kubelet[1867]: I0913 00:57:07.509880 1867 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:10.631044 kubelet[1867]: I0913 00:57:10.630982 1867 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:10.631845 kubelet[1867]: E0913 00:57:10.631806 1867 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\": node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" not found" Sep 13 00:57:10.654846 kubelet[1867]: E0913 00:57:10.654683 1867 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4.1864b199213f2f9c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,UID:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,},FirstTimestamp:2025-09-13 00:57:05.827237788 +0000 UTC m=+0.637573434,LastTimestamp:2025-09-13 00:57:05.827237788 +0000 UTC m=+0.637573434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,}" Sep 13 00:57:10.760034 kubelet[1867]: E0913 00:57:10.759982 1867 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Sep 13 00:57:10.760906 kubelet[1867]: E0913 00:57:10.760752 1867 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4.1864b199216442c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,UID:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,},FirstTimestamp:2025-09-13 00:57:05.829667525 +0000 UTC m=+0.640003172,LastTimestamp:2025-09-13 00:57:05.829667525 +0000 UTC m=+0.640003172,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4,}" Sep 13 00:57:10.813989 kubelet[1867]: I0913 00:57:10.813932 1867 apiserver.go:52] "Watching apiserver" Sep 13 00:57:10.854250 kubelet[1867]: I0913 00:57:10.854184 1867 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:57:12.680767 systemd[1]: Reloading. 
Sep 13 00:57:12.800073 /usr/lib/systemd/system-generators/torcx-generator[2162]: time="2025-09-13T00:57:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:57:12.800134 /usr/lib/systemd/system-generators/torcx-generator[2162]: time="2025-09-13T00:57:12Z" level=info msg="torcx already run" Sep 13 00:57:12.918945 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:57:12.918974 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:57:12.943548 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:57:13.083040 systemd[1]: Stopping kubelet.service... Sep 13 00:57:13.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:13.103316 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:57:13.103842 systemd[1]: Stopped kubelet.service. Sep 13 00:57:13.107433 systemd[1]: Starting kubelet.service... Sep 13 00:57:13.130722 kernel: kauditd_printk_skb: 44 callbacks suppressed Sep 13 00:57:13.130887 kernel: audit: type=1131 audit(1757725033.102:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:13.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:13.400273 systemd[1]: Started kubelet.service. Sep 13 00:57:13.425633 kernel: audit: type=1130 audit(1757725033.402:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:13.497253 kubelet[2221]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:57:13.497253 kubelet[2221]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:57:13.497253 kubelet[2221]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:57:13.497253 kubelet[2221]: I0913 00:57:13.496555 2221 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:57:13.508554 kubelet[2221]: I0913 00:57:13.508266 2221 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:57:13.508554 kubelet[2221]: I0913 00:57:13.508303 2221 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:57:13.510600 kubelet[2221]: I0913 00:57:13.509340 2221 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:57:13.515961 kubelet[2221]: I0913 00:57:13.515838 2221 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:57:13.524443 kubelet[2221]: I0913 00:57:13.524398 2221 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:57:13.528667 kubelet[2221]: E0913 00:57:13.528596 2221 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:57:13.528667 kubelet[2221]: I0913 00:57:13.528662 2221 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:57:13.532436 kubelet[2221]: I0913 00:57:13.532405 2221 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:57:13.533079 kubelet[2221]: I0913 00:57:13.533039 2221 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:57:13.533316 kubelet[2221]: I0913 00:57:13.533231 2221 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:57:13.533560 kubelet[2221]: I0913 00:57:13.533307 2221 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:57:13.533743 kubelet[2221]: I0913 00:57:13.533566 2221 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:57:13.533743 kubelet[2221]: I0913 00:57:13.533583 2221 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:57:13.533743 kubelet[2221]: I0913 00:57:13.533640 2221 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:57:13.533919 kubelet[2221]: I0913 00:57:13.533818 2221 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:57:13.539870 kubelet[2221]: I0913 00:57:13.534692 2221 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:57:13.539870 kubelet[2221]: I0913 00:57:13.534767 2221 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:57:13.539870 kubelet[2221]: I0913 00:57:13.534785 2221 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:57:13.548972 kubelet[2221]: I0913 00:57:13.548941 2221 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:57:13.550432 kubelet[2221]: I0913 00:57:13.550383 2221 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:57:13.551504 kubelet[2221]: I0913 00:57:13.551485 2221 server.go:1274] "Started kubelet" Sep 13 00:57:13.556823 kubelet[2221]: I0913 00:57:13.556757 2221 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:57:13.555000 audit[2221]: AVC avc: denied { mac_admin } for pid=2221 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:13.557367 kubelet[2221]: I0913 00:57:13.557337 2221 kubelet.go:1434] "Unprivileged containerized plugins might 
not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:57:13.557520 kubelet[2221]: I0913 00:57:13.557504 2221 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:57:13.560382 kubelet[2221]: I0913 00:57:13.560154 2221 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:57:13.569312 kubelet[2221]: I0913 00:57:13.569278 2221 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:57:13.578646 kernel: audit: type=1400 audit(1757725033.555:211): avc: denied { mac_admin } for pid=2221 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:13.555000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:13.591714 kernel: audit: type=1401 audit(1757725033.555:211): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:13.591837 kernel: audit: type=1300 audit(1757725033.555:211): arch=c000003e syscall=188 success=no exit=-22 a0=c000b71170 a1=c000b0dd28 a2=c000b71140 a3=25 items=0 ppid=1 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:13.555000 audit[2221]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b71170 a1=c000b0dd28 a2=c000b71140 a3=25 items=0 ppid=1 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:13.594990 kubelet[2221]: I0913 00:57:13.560799 2221 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:57:13.598125 kubelet[2221]: I0913 00:57:13.598096 2221 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:57:13.598445 kubelet[2221]: I0913 00:57:13.566270 2221 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:57:13.598586 kubelet[2221]: I0913 00:57:13.566246 2221 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:57:13.598864 kubelet[2221]: E0913 00:57:13.566515 2221 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" not found" Sep 13 00:57:13.598985 kubelet[2221]: I0913 00:57:13.562872 2221 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:57:13.601379 kubelet[2221]: I0913 00:57:13.595872 2221 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:57:13.601780 kubelet[2221]: I0913 00:57:13.601739 2221 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:57:13.602362 kubelet[2221]: I0913 00:57:13.602344 2221 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:57:13.617460 kubelet[2221]: I0913 00:57:13.617427 2221 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:57:13.555000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:13.651928 kernel: audit: type=1327 audit(1757725033.555:211): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:13.673837 kernel: audit: type=1400 audit(1757725033.555:212): avc: denied { mac_admin } for pid=2221 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:13.555000 audit[2221]: AVC avc: denied { mac_admin } for pid=2221 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:13.555000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:13.555000 audit[2221]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b48b20 a1=c000b0dd40 a2=c000b71200 a3=25 items=0 ppid=1 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:13.693390 kubelet[2221]: I0913 00:57:13.693341 2221 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:57:13.716843 kubelet[2221]: I0913 00:57:13.716807 2221 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:57:13.717079 kubelet[2221]: I0913 00:57:13.717063 2221 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:57:13.717219 kubelet[2221]: I0913 00:57:13.717202 2221 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:57:13.717433 kubelet[2221]: E0913 00:57:13.717394 2221 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:57:13.722919 kernel: audit: type=1401 audit(1757725033.555:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:13.723089 kernel: audit: type=1300 audit(1757725033.555:212): arch=c000003e syscall=188 success=no exit=-22 a0=c000b48b20 a1=c000b0dd40 a2=c000b71200 a3=25 items=0 ppid=1 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:13.723186 kernel: audit: type=1327 audit(1757725033.555:212): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:13.555000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:13.797401 kubelet[2221]: I0913 00:57:13.796020 2221 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:57:13.797401 kubelet[2221]: I0913 00:57:13.796043 2221 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:57:13.797401 kubelet[2221]: I0913 00:57:13.796069 2221 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:57:13.797401 kubelet[2221]: I0913 00:57:13.796318 2221 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:57:13.797401 kubelet[2221]: I0913 00:57:13.796336 2221 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:57:13.797401 kubelet[2221]: I0913 00:57:13.796363 2221 policy_none.go:49] "None policy: Start" Sep 13 00:57:13.797401 kubelet[2221]: I0913 00:57:13.797184 2221 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:57:13.797401 kubelet[2221]: I0913 00:57:13.797223 2221 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:57:13.797968 kubelet[2221]: I0913 00:57:13.797438 2221 state_mem.go:75] "Updated machine memory state" Sep 13 00:57:13.797000 audit[2221]: AVC avc: denied { mac_admin } for pid=2221 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:13.797000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:57:13.797000 audit[2221]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00076f890 a1=c000ac5818 a2=c00076f860 a3=25 items=0 ppid=1 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:13.797000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:57:13.800117 kubelet[2221]: I0913 00:57:13.799490 2221 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:57:13.800117 kubelet[2221]: I0913 00:57:13.799601 2221 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:57:13.800117 kubelet[2221]: I0913 00:57:13.799826 2221 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:57:13.800117 kubelet[2221]: I0913 00:57:13.799843 2221 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:57:13.804752 kubelet[2221]: I0913 00:57:13.804079 2221 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:57:13.834157 kubelet[2221]: W0913 00:57:13.834113 2221 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 13 00:57:13.834483 kubelet[2221]: W0913 00:57:13.834399 2221 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 13 00:57:13.835573 kubelet[2221]: W0913 00:57:13.834585 2221 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 13 00:57:13.913539 kubelet[2221]: I0913 00:57:13.913479 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.913772 kubelet[2221]: I0913 00:57:13.913638 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.913772 kubelet[2221]: I0913 00:57:13.913682 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a760936a5deb629b583abadb120aa137-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"a760936a5deb629b583abadb120aa137\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.913772 kubelet[2221]: I0913 00:57:13.913713 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.913772 kubelet[2221]: I0913 00:57:13.913740 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.914007 kubelet[2221]: I0913 00:57:13.913767 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/144c3e8cad4e41a0bd106a473978c8fb-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"144c3e8cad4e41a0bd106a473978c8fb\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.914007 kubelet[2221]: I0913 00:57:13.913796 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/044d5bf1fc34ac3a54c79ca8f8ae2a98-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"044d5bf1fc34ac3a54c79ca8f8ae2a98\") " pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.914007 kubelet[2221]: I0913 00:57:13.913827 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a760936a5deb629b583abadb120aa137-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"a760936a5deb629b583abadb120aa137\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.914007 kubelet[2221]: I0913 00:57:13.913858 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a760936a5deb629b583abadb120aa137-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" (UID: \"a760936a5deb629b583abadb120aa137\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.920781 kubelet[2221]: I0913 00:57:13.920753 2221 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.930388 kubelet[2221]: I0913 00:57:13.930245 2221 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:13.930761 kubelet[2221]: I0913 00:57:13.930738 2221 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:14.546748 kubelet[2221]: I0913 00:57:14.546688 2221 apiserver.go:52] "Watching apiserver" Sep 13 00:57:14.599714 kubelet[2221]: I0913 00:57:14.599660 2221 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 
00:57:14.784936 kubelet[2221]: W0913 00:57:14.784902 2221 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 13 00:57:14.785273 kubelet[2221]: E0913 00:57:14.785244 2221 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:14.828503 kubelet[2221]: I0913 00:57:14.828320 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" podStartSLOduration=1.8282956989999999 podStartE2EDuration="1.828295699s" podCreationTimestamp="2025-09-13 00:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:57:14.817279368 +0000 UTC m=+1.403773387" watchObservedRunningTime="2025-09-13 00:57:14.828295699 +0000 UTC m=+1.414789715" Sep 13 00:57:14.839232 kubelet[2221]: I0913 00:57:14.839144 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" podStartSLOduration=1.8391037670000001 podStartE2EDuration="1.839103767s" podCreationTimestamp="2025-09-13 00:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:57:14.828720157 +0000 UTC m=+1.415214176" watchObservedRunningTime="2025-09-13 00:57:14.839103767 +0000 UTC m=+1.425597786" Sep 13 00:57:14.854854 kubelet[2221]: I0913 00:57:14.854757 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" podStartSLOduration=1.854730395 podStartE2EDuration="1.854730395s" podCreationTimestamp="2025-09-13 00:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:57:14.839487804 +0000 UTC m=+1.425981822" watchObservedRunningTime="2025-09-13 00:57:14.854730395 +0000 UTC m=+1.441224421" Sep 13 00:57:18.782450 update_engine[1315]: I0913 00:57:18.782383 1315 update_attempter.cc:509] Updating boot flags... Sep 13 00:57:19.310960 kubelet[2221]: I0913 00:57:19.310922 2221 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:57:19.311761 env[1325]: time="2025-09-13T00:57:19.311378967Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:57:19.312288 kubelet[2221]: I0913 00:57:19.311770 2221 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:57:19.952950 kubelet[2221]: I0913 00:57:19.952890 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a41668d-694e-4cf6-91d8-1fba9bc9c2b7-kube-proxy\") pod \"kube-proxy-5zhpg\" (UID: \"3a41668d-694e-4cf6-91d8-1fba9bc9c2b7\") " pod="kube-system/kube-proxy-5zhpg" Sep 13 00:57:19.952950 kubelet[2221]: I0913 00:57:19.952943 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a41668d-694e-4cf6-91d8-1fba9bc9c2b7-xtables-lock\") pod \"kube-proxy-5zhpg\" (UID: \"3a41668d-694e-4cf6-91d8-1fba9bc9c2b7\") " pod="kube-system/kube-proxy-5zhpg" Sep 13 00:57:19.953262 kubelet[2221]: I0913 00:57:19.952973 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfqzm\" (UniqueName: \"kubernetes.io/projected/3a41668d-694e-4cf6-91d8-1fba9bc9c2b7-kube-api-access-nfqzm\") pod \"kube-proxy-5zhpg\" (UID: \"3a41668d-694e-4cf6-91d8-1fba9bc9c2b7\") " pod="kube-system/kube-proxy-5zhpg" Sep 13 00:57:19.953262 kubelet[2221]: I0913 00:57:19.953002 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a41668d-694e-4cf6-91d8-1fba9bc9c2b7-lib-modules\") pod \"kube-proxy-5zhpg\" (UID: \"3a41668d-694e-4cf6-91d8-1fba9bc9c2b7\") " pod="kube-system/kube-proxy-5zhpg" Sep 13 00:57:20.065168 kubelet[2221]: I0913 00:57:20.065111 2221 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:57:20.206992 env[1325]: time="2025-09-13T00:57:20.206847367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5zhpg,Uid:3a41668d-694e-4cf6-91d8-1fba9bc9c2b7,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:20.236787 env[1325]: time="2025-09-13T00:57:20.236680820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:20.236787 env[1325]: time="2025-09-13T00:57:20.236740482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:20.237168 env[1325]: time="2025-09-13T00:57:20.236758716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:20.237168 env[1325]: time="2025-09-13T00:57:20.237011638Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/963816c3eda481fa5ab86afd15cf4f8ca7b1ea5f5e54bb8fe451c3dc15b8c3c7 pid=2288 runtime=io.containerd.runc.v2 Sep 13 00:57:20.310514 env[1325]: time="2025-09-13T00:57:20.310048726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5zhpg,Uid:3a41668d-694e-4cf6-91d8-1fba9bc9c2b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"963816c3eda481fa5ab86afd15cf4f8ca7b1ea5f5e54bb8fe451c3dc15b8c3c7\"" Sep 13 00:57:20.315260 env[1325]: time="2025-09-13T00:57:20.314559141Z" level=info msg="CreateContainer within sandbox \"963816c3eda481fa5ab86afd15cf4f8ca7b1ea5f5e54bb8fe451c3dc15b8c3c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:57:20.342555 env[1325]: time="2025-09-13T00:57:20.342481148Z" level=info msg="CreateContainer within sandbox \"963816c3eda481fa5ab86afd15cf4f8ca7b1ea5f5e54bb8fe451c3dc15b8c3c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96851ac4d5251ed066c79a04e84aa79b9141ff473dd4bd7b705f3964c0a0b2c0\"" Sep 13 00:57:20.345307 env[1325]: time="2025-09-13T00:57:20.343482475Z" level=info msg="StartContainer for \"96851ac4d5251ed066c79a04e84aa79b9141ff473dd4bd7b705f3964c0a0b2c0\"" Sep 13 00:57:20.459670 env[1325]: time="2025-09-13T00:57:20.458972764Z" level=info msg="StartContainer for \"96851ac4d5251ed066c79a04e84aa79b9141ff473dd4bd7b705f3964c0a0b2c0\" returns successfully" Sep 13 00:57:20.558261 kubelet[2221]: I0913 00:57:20.558209 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34482827-1da2-4d78-bf1c-7f7e28fbf4c0-var-lib-calico\") pod \"tigera-operator-58fc44c59b-7f6cx\" (UID: \"34482827-1da2-4d78-bf1c-7f7e28fbf4c0\") " pod="tigera-operator/tigera-operator-58fc44c59b-7f6cx" Sep 13 00:57:20.559042 kubelet[2221]: I0913 00:57:20.559010 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml52h\" (UniqueName: \"kubernetes.io/projected/34482827-1da2-4d78-bf1c-7f7e28fbf4c0-kube-api-access-ml52h\") pod \"tigera-operator-58fc44c59b-7f6cx\" (UID: \"34482827-1da2-4d78-bf1c-7f7e28fbf4c0\") " pod="tigera-operator/tigera-operator-58fc44c59b-7f6cx" Sep 13 00:57:20.623000 audit[2388]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.629351 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:57:20.629519 kernel: audit: type=1325 audit(1757725040.623:214): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.624000 audit[2389]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.660787 kernel: audit: type=1325 audit(1757725040.624:215): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.624000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeadcc5420 a2=0 a3=7ffeadcc540c items=0 ppid=2341 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.693358 kernel: audit: type=1300 audit(1757725040.624:215): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeadcc5420 a2=0 a3=7ffeadcc540c items=0 ppid=2341 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.624000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:57:20.709674 kernel: audit: type=1327 audit(1757725040.624:215): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:57:20.623000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8f892e30 a2=0 a3=7ffd8f892e1c items=0 ppid=2341 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.742696 kernel: audit: type=1300 audit(1757725040.623:214): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8f892e30 a2=0 a3=7ffd8f892e1c items=0 ppid=2341 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:57:20.760633 kernel: audit: type=1327 audit(1757725040.623:214): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:57:20.634000 audit[2391]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.775552 env[1325]: time="2025-09-13T00:57:20.775040619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-7f6cx,Uid:34482827-1da2-4d78-bf1c-7f7e28fbf4c0,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:57:20.634000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecaf2e880 a2=0 a3=7ffecaf2e86c items=0 ppid=2341 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.809294 kernel: audit: type=1325 audit(1757725040.634:216): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.809466 kernel: audit: type=1300 audit(1757725040.634:216): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecaf2e880 a2=0 a3=7ffecaf2e86c items=0 ppid=2341 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.634000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:57:20.841036 kernel: audit: type=1327 audit(1757725040.634:216): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:57:20.841200 kernel: audit: type=1325 audit(1757725040.637:217): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.637000 audit[2392]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.637000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7d5854c0 a2=0 a3=7fff7d5854ac items=0 ppid=2341 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:57:20.649000 audit[2390]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.649000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd33d0f50 a2=0 a3=7ffdd33d0f3c items=0 ppid=2341 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:57:20.654000 audit[2393]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.654000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf9277d80 a2=0 a3=7ffcf9277d6c items=0 ppid=2341 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:57:20.749000 audit[2395]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.749000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffddb3e3a80 a2=0 a3=7ffddb3e3a6c items=0 ppid=2341 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:57:20.775000 audit[2397]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.775000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe0c0f1470 a2=0 a3=7ffe0c0f145c items=0 ppid=2341 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 
13 00:57:20.775000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 00:57:20.801000 audit[2400]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.801000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe8925e670 a2=0 a3=7ffe8925e65c items=0 ppid=2341 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 00:57:20.817000 audit[2401]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.817000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc37db800 a2=0 a3=7fffc37db7ec items=0 ppid=2341 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:57:20.823000 audit[2403]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.823000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc100a1fe0 a2=0 a3=7ffc100a1fcc items=0 ppid=2341 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:57:20.825000 audit[2404]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.825000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd800446a0 a2=0 a3=7ffd8004468c items=0 ppid=2341 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:57:20.831000 audit[2406]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.831000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 
a1=7ffd4c278590 a2=0 a3=7ffd4c27857c items=0 ppid=2341 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:57:20.838000 audit[2409]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.838000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffddee31060 a2=0 a3=7ffddee3104c items=0 ppid=2341 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 00:57:20.840000 audit[2410]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.840000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5ba5f760 a2=0 a3=7ffe5ba5f74c items=0 ppid=2341 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:57:20.849000 audit[2413]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.849000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd62ffd170 a2=0 a3=7ffd62ffd15c items=0 ppid=2341 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:57:20.851000 audit[2414]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.851000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd517959e0 a2=0 a3=7ffd517959cc items=0 ppid=2341 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:57:20.856000 
audit[2423]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.856000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd75230140 a2=0 a3=7ffd7523012c items=0 ppid=2341 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:57:20.863263 env[1325]: time="2025-09-13T00:57:20.863169681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:20.863547 env[1325]: time="2025-09-13T00:57:20.863490658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:20.863791 env[1325]: time="2025-09-13T00:57:20.863741248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:20.864191 env[1325]: time="2025-09-13T00:57:20.864142568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aee0578f559ab729a0b70dfdb45322bd3a8599a873304a4a855ea53cdbd0ee37 pid=2424 runtime=io.containerd.runc.v2 Sep 13 00:57:20.866000 audit[2435]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.866000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9e9a81f0 a2=0 a3=7ffd9e9a81dc items=0 ppid=2341 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:57:20.873000 audit[2441]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.873000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff132a3530 a2=0 a3=7fff132a351c items=0 ppid=2341 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:57:20.876000 audit[2446]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.876000 audit[2446]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc1f59aa0 a2=0 a3=7fffc1f59a8c items=0 ppid=2341 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:57:20.881000 audit[2449]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.881000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff3624cda0 a2=0 a3=7fff3624cd8c items=0 ppid=2341 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:57:20.889000 audit[2454]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.889000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefc2d7280 a2=0 a3=7ffefc2d726c items=0 ppid=2341 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:57:20.892000 audit[2455]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.892000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe2cfe4b0 a2=0 a3=7fffe2cfe49c items=0 ppid=2341 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:57:20.898000 audit[2457]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:57:20.898000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd709e8da0 a2=0 a3=7ffd709e8d8c items=0 ppid=2341 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:57:20.950000 audit[2471]: NETFILTER_CFG table=filter:63 
family=2 entries=8 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:20.950000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc35a65d30 a2=0 a3=7ffc35a65d1c items=0 ppid=2341 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:20.966425 env[1325]: time="2025-09-13T00:57:20.966285674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-7f6cx,Uid:34482827-1da2-4d78-bf1c-7f7e28fbf4c0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aee0578f559ab729a0b70dfdb45322bd3a8599a873304a4a855ea53cdbd0ee37\"" Sep 13 00:57:20.968000 audit[2471]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:20.968000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc35a65d30 a2=0 a3=7ffc35a65d1c items=0 ppid=2341 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:20.973000 audit[2482]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.973000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcea3887b0 a2=0 a3=7ffcea38879c items=0 ppid=2341 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:57:20.976216 env[1325]: time="2025-09-13T00:57:20.975270707Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:57:20.979000 audit[2484]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.979000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcc46f9120 a2=0 a3=7ffcc46f910c items=0 ppid=2341 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 00:57:20.984000 audit[2487]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.984000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7ffdac57e8c0 a2=0 a3=7ffdac57e8ac items=0 ppid=2341 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 00:57:20.986000 audit[2488]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.986000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4ee644f0 a2=0 a3=7ffd4ee644dc items=0 ppid=2341 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:57:20.989000 audit[2490]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.989000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea9a2e7e0 a2=0 a3=7ffea9a2e7cc items=0 ppid=2341 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.989000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:57:20.991000 audit[2491]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.991000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc64d5b80 a2=0 a3=7ffdc64d5b6c items=0 ppid=2341 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:57:20.995000 audit[2493]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:20.995000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcedfa9450 a2=0 a3=7ffcedfa943c items=0 ppid=2341 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:20.995000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 00:57:21.001000 audit[2496]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.001000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff6da7a4a0 a2=0 a3=7fff6da7a48c items=0 ppid=2341 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:57:21.003000 audit[2497]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.003000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe21c89900 a2=0 a3=7ffe21c898ec items=0 ppid=2341 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:57:21.006000 audit[2499]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.006000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec35bf8e0 a2=0 a3=7ffec35bf8cc items=0 ppid=2341 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:57:21.009000 audit[2500]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.009000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc95fe3970 a2=0 a3=7ffc95fe395c items=0 ppid=2341 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.009000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:57:21.012000 audit[2502]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.012000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe72d651f0 a2=0 a3=7ffe72d651dc 
items=0 ppid=2341 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:57:21.018000 audit[2505]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.018000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9a7da100 a2=0 a3=7ffe9a7da0ec items=0 ppid=2341 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:57:21.023000 audit[2508]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.023000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffffa7d1fc0 a2=0 a3=7ffffa7d1fac items=0 ppid=2341 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 00:57:21.025000 audit[2509]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.025000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc01bb0b0 a2=0 a3=7fffc01bb09c items=0 ppid=2341 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:57:21.029000 audit[2511]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.029000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffde5b43090 a2=0 a3=7ffde5b4307c items=0 ppid=2341 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.029000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:57:21.035000 audit[2514]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.035000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffcd0837100 a2=0 a3=7ffcd08370ec items=0 ppid=2341 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:57:21.037000 audit[2515]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.037000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe34ad9c40 a2=0 a3=7ffe34ad9c2c items=0 ppid=2341 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:57:21.041000 audit[2517]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.041000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe5569bda0 a2=0 a3=7ffe5569bd8c items=0 ppid=2341 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.041000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:57:21.043000 audit[2518]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.043000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff01614470 a2=0 a3=7fff0161445c items=0 ppid=2341 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.043000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:57:21.047000 audit[2520]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.047000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe61c64380 a2=0 a3=7ffe61c6436c items=0 ppid=2341 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:57:21.053000 audit[2523]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:57:21.053000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc9fde49a0 a2=0 a3=7ffc9fde498c items=0 ppid=2341 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:57:21.060000 audit[2525]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:57:21.060000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc74eaf160 a2=0 a3=7ffc74eaf14c items=0 ppid=2341 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.060000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:21.061000 audit[2525]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:57:21.061000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc74eaf160 a2=0 a3=7ffc74eaf14c items=0 ppid=2341 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:21.061000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:21.078902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336052759.mount: Deactivated successfully. Sep 13 00:57:21.103416 kubelet[2221]: I0913 00:57:21.103329 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5zhpg" podStartSLOduration=2.103303461 podStartE2EDuration="2.103303461s" podCreationTimestamp="2025-09-13 00:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:57:20.804727999 +0000 UTC m=+7.391222017" watchObservedRunningTime="2025-09-13 00:57:21.103303461 +0000 UTC m=+7.689797482" Sep 13 00:57:22.125067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135166677.mount: Deactivated successfully. 
Sep 13 00:57:23.508029 env[1325]: time="2025-09-13T00:57:23.507963713Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:23.510790 env[1325]: time="2025-09-13T00:57:23.510693332Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:23.512727 env[1325]: time="2025-09-13T00:57:23.512681658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:23.514915 env[1325]: time="2025-09-13T00:57:23.514855065Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:23.516165 env[1325]: time="2025-09-13T00:57:23.516104115Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:57:23.520353 env[1325]: time="2025-09-13T00:57:23.520304993Z" level=info msg="CreateContainer within sandbox \"aee0578f559ab729a0b70dfdb45322bd3a8599a873304a4a855ea53cdbd0ee37\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:57:23.540029 env[1325]: time="2025-09-13T00:57:23.539965014Z" level=info msg="CreateContainer within sandbox \"aee0578f559ab729a0b70dfdb45322bd3a8599a873304a4a855ea53cdbd0ee37\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e368a4957092693b4b36db1b8aa8b085dfdd1a3e654c7c0ec9735f3f5e3f35a2\"" Sep 13 00:57:23.541191 env[1325]: time="2025-09-13T00:57:23.541150766Z" level=info msg="StartContainer for \"e368a4957092693b4b36db1b8aa8b085dfdd1a3e654c7c0ec9735f3f5e3f35a2\"" Sep 13 00:57:23.596069 systemd[1]: run-containerd-runc-k8s.io-e368a4957092693b4b36db1b8aa8b085dfdd1a3e654c7c0ec9735f3f5e3f35a2-runc.EbxbqS.mount: Deactivated successfully. Sep 13 00:57:23.656591 env[1325]: time="2025-09-13T00:57:23.656526500Z" level=info msg="StartContainer for \"e368a4957092693b4b36db1b8aa8b085dfdd1a3e654c7c0ec9735f3f5e3f35a2\" returns successfully" Sep 13 00:57:26.897641 kubelet[2221]: I0913 00:57:26.897545 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-7f6cx" podStartSLOduration=4.3523996050000004 podStartE2EDuration="6.897519878s" podCreationTimestamp="2025-09-13 00:57:20 +0000 UTC" firstStartedPulling="2025-09-13 00:57:20.972540764 +0000 UTC m=+7.559034773" lastFinishedPulling="2025-09-13 00:57:23.517661051 +0000 UTC m=+10.104155046" observedRunningTime="2025-09-13 00:57:23.809957858 +0000 UTC m=+10.396451879" watchObservedRunningTime="2025-09-13 00:57:26.897519878 +0000 UTC m=+13.484013897" Sep 13 00:57:31.245348 sudo[1584]: pam_unix(sudo:session): session closed for user root Sep 13 00:57:31.243000 audit[1584]: USER_END pid=1584 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:57:31.250946 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 13 00:57:31.251066 kernel: audit: type=1106 audit(1757725051.243:265): pid=1584 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:57:31.275000 audit[1584]: CRED_DISP pid=1584 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:57:31.302645 kernel: audit: type=1104 audit(1757725051.275:266): pid=1584 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:57:31.337707 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 13 00:57:31.337000 audit[1580]: USER_END pid=1580 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:57:31.372655 kernel: audit: type=1106 audit(1757725051.337:267): pid=1580 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:57:31.376036 systemd[1]: sshd@6-10.128.0.69:22-139.178.68.195:41676.service: Deactivated successfully. Sep 13 00:57:31.377318 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:57:31.378122 systemd-logind[1310]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:57:31.382767 systemd-logind[1310]: Removed session 7. Sep 13 00:57:31.370000 audit[1580]: CRED_DISP pid=1580 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:57:31.408662 kernel: audit: type=1104 audit(1757725051.370:268): pid=1580 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:57:31.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.69:22-139.178.68.195:41676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:57:31.433792 kernel: audit: type=1131 audit(1757725051.374:269): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.69:22-139.178.68.195:41676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:57:32.945000 audit[2609]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:32.964644 kernel: audit: type=1325 audit(1757725052.945:270): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:32.945000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffefcb51530 a2=0 a3=7ffefcb5151c items=0 ppid=2341 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:33.028949 kernel: audit: type=1300 audit(1757725052.945:270): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffefcb51530 a2=0 a3=7ffefcb5151c items=0 ppid=2341 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:33.029102 kernel: audit: type=1327 audit(1757725052.945:270): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:32.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:32.984000 audit[2609]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:33.048753 kernel: audit: type=1325 audit(1757725052.984:271): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:32.984000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffefcb51530 a2=0 a3=0 items=0 ppid=2341 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:33.083636 kernel: audit: type=1300 audit(1757725052.984:271): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffefcb51530 a2=0 a3=0 items=0 ppid=2341 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:32.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:33.108000 audit[2611]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:33.108000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdcf3610b0 a2=0 a3=7ffdcf36109c items=0 ppid=2341 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:33.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:33.113000 audit[2611]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2611 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:33.113000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdcf3610b0 a2=0 a3=0 items=0 ppid=2341 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:33.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:36.748669 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:57:36.748820 kernel: audit: type=1325 audit(1757725056.742:274): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:36.742000 audit[2614]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:36.742000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffca3a3ea60 a2=0 a3=7ffca3a3ea4c items=0 ppid=2341 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:36.802755 kernel: audit: type=1300 audit(1757725056.742:274): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffca3a3ea60 a2=0 a3=7ffca3a3ea4c items=0 ppid=2341 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:36.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:36.843653 kernel: audit: type=1327 audit(1757725056.742:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:36.797000 audit[2614]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:36.867641 kernel: audit: type=1325 audit(1757725056.797:275): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:36.797000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca3a3ea60 a2=0 a3=0 items=0 ppid=2341 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:36.900638 kernel: audit: type=1300 audit(1757725056.797:275): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca3a3ea60 a2=0 a3=0 items=0 ppid=2341 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:36.797000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:36.917643 kernel: audit: type=1327 audit(1757725056.797:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 
13 00:57:36.834000 audit[2616]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:36.937636 kernel: audit: type=1325 audit(1757725056.834:276): table=filter:95 family=2 entries=18 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:36.834000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc8836da80 a2=0 a3=7ffc8836da6c items=0 ppid=2341 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:36.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:36.986729 kernel: audit: type=1300 audit(1757725056.834:276): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc8836da80 a2=0 a3=7ffc8836da6c items=0 ppid=2341 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:36.986899 kernel: audit: type=1327 audit(1757725056.834:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:36.853000 audit[2616]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:36.853000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc8836da80 a2=0 a3=0 items=0 ppid=2341 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:37.005125 kernel: audit: type=1325 audit(1757725056.853:277): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:36.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:37.404362 kubelet[2221]: I0913 00:57:37.404299 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bc5b46b6-7538-4c7e-bdc8-c67cdc18e6f9-typha-certs\") pod \"calico-typha-796f7d58cf-mcpr8\" (UID: \"bc5b46b6-7538-4c7e-bdc8-c67cdc18e6f9\") " pod="calico-system/calico-typha-796f7d58cf-mcpr8" Sep 13 00:57:37.404362 kubelet[2221]: I0913 00:57:37.404357 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56nr2\" (UniqueName: \"kubernetes.io/projected/bc5b46b6-7538-4c7e-bdc8-c67cdc18e6f9-kube-api-access-56nr2\") pod \"calico-typha-796f7d58cf-mcpr8\" (UID: \"bc5b46b6-7538-4c7e-bdc8-c67cdc18e6f9\") " pod="calico-system/calico-typha-796f7d58cf-mcpr8" Sep 13 00:57:37.404977 kubelet[2221]: I0913 00:57:37.404397 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc5b46b6-7538-4c7e-bdc8-c67cdc18e6f9-tigera-ca-bundle\") pod \"calico-typha-796f7d58cf-mcpr8\" (UID: \"bc5b46b6-7538-4c7e-bdc8-c67cdc18e6f9\") " 
pod="calico-system/calico-typha-796f7d58cf-mcpr8" Sep 13 00:57:37.553719 env[1325]: time="2025-09-13T00:57:37.553653945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796f7d58cf-mcpr8,Uid:bc5b46b6-7538-4c7e-bdc8-c67cdc18e6f9,Namespace:calico-system,Attempt:0,}" Sep 13 00:57:37.586914 env[1325]: time="2025-09-13T00:57:37.586800355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:37.587114 env[1325]: time="2025-09-13T00:57:37.586931988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:37.587114 env[1325]: time="2025-09-13T00:57:37.586973701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:37.587256 env[1325]: time="2025-09-13T00:57:37.587204820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e22fe93a8a3c8275bb0741a37f86779bb2d92fe0a5c30f793c5e2cb47fca7fd0 pid=2625 runtime=io.containerd.runc.v2 Sep 13 00:57:37.844478 env[1325]: time="2025-09-13T00:57:37.844327454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796f7d58cf-mcpr8,Uid:bc5b46b6-7538-4c7e-bdc8-c67cdc18e6f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"e22fe93a8a3c8275bb0741a37f86779bb2d92fe0a5c30f793c5e2cb47fca7fd0\"" Sep 13 00:57:37.847111 env[1325]: time="2025-09-13T00:57:37.847057261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:57:37.877682 kubelet[2221]: E0913 00:57:37.877599 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjjfb" podUID="6497f54b-f081-4e3e-89dc-fe9b1d7d52c2" Sep 13 00:57:37.909831 kubelet[2221]: I0913 00:57:37.909070 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-var-run-calico\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.909831 kubelet[2221]: I0913 00:57:37.909148 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-node-certs\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.909831 kubelet[2221]: I0913 00:57:37.909196 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rc2\" (UniqueName: \"kubernetes.io/projected/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-kube-api-access-x6rc2\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.909831 kubelet[2221]: I0913 00:57:37.909231 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-cni-net-dir\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " 
pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.909831 kubelet[2221]: I0913 00:57:37.909275 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-flexvol-driver-host\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.910267 kubelet[2221]: I0913 00:57:37.909301 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-xtables-lock\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.910267 kubelet[2221]: I0913 00:57:37.909344 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-cni-bin-dir\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.910267 kubelet[2221]: I0913 00:57:37.909370 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-lib-modules\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.910267 kubelet[2221]: I0913 00:57:37.909396 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-policysync\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.910267 kubelet[2221]: I0913 00:57:37.909440 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-cni-log-dir\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.910501 kubelet[2221]: I0913 00:57:37.909469 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-tigera-ca-bundle\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.910501 kubelet[2221]: I0913 00:57:37.909528 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5d9e403-1092-42c3-bb01-86c5b0fc3bdc-var-lib-calico\") pod \"calico-node-bczc8\" (UID: \"b5d9e403-1092-42c3-bb01-86c5b0fc3bdc\") " pod="calico-system/calico-node-bczc8" Sep 13 00:57:37.931000 audit[2659]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:37.931000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc233741d0 a2=0 a3=7ffc233741bc items=0 ppid=2341 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:37.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:37.933000 audit[2659]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:37.933000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc233741d0 a2=0 a3=0 items=0 ppid=2341 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:37.933000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:38.011038 kubelet[2221]: I0913 00:57:38.010984 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6497f54b-f081-4e3e-89dc-fe9b1d7d52c2-kubelet-dir\") pod \"csi-node-driver-bjjfb\" (UID: \"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2\") " pod="calico-system/csi-node-driver-bjjfb" Sep 13 00:57:38.011038 kubelet[2221]: I0913 00:57:38.011042 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6497f54b-f081-4e3e-89dc-fe9b1d7d52c2-socket-dir\") pod \"csi-node-driver-bjjfb\" (UID: \"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2\") " pod="calico-system/csi-node-driver-bjjfb" Sep 13 00:57:38.011307 kubelet[2221]: I0913 00:57:38.011069 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6497f54b-f081-4e3e-89dc-fe9b1d7d52c2-varrun\") pod \"csi-node-driver-bjjfb\" (UID: \"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2\") " pod="calico-system/csi-node-driver-bjjfb" Sep 13 00:57:38.011307 kubelet[2221]: I0913 00:57:38.011114 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8wz\" (UniqueName: \"kubernetes.io/projected/6497f54b-f081-4e3e-89dc-fe9b1d7d52c2-kube-api-access-rd8wz\") pod \"csi-node-driver-bjjfb\" (UID: \"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2\") " pod="calico-system/csi-node-driver-bjjfb" Sep 13 00:57:38.011307 kubelet[2221]: I0913 00:57:38.011154 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6497f54b-f081-4e3e-89dc-fe9b1d7d52c2-registration-dir\") pod \"csi-node-driver-bjjfb\" (UID: \"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2\") " pod="calico-system/csi-node-driver-bjjfb" Sep 13 00:57:38.019023 kubelet[2221]: E0913 00:57:38.018564 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.019023 kubelet[2221]: W0913 00:57:38.018592 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.019023 kubelet[2221]: E0913 00:57:38.018641 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:38.027503 kubelet[2221]: E0913 00:57:38.027451 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.027503 kubelet[2221]: W0913 00:57:38.027497 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.027777 kubelet[2221]: E0913 00:57:38.027526 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.050483 env[1325]: time="2025-09-13T00:57:38.050421263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bczc8,Uid:b5d9e403-1092-42c3-bb01-86c5b0fc3bdc,Namespace:calico-system,Attempt:0,}" Sep 13 00:57:38.071696 env[1325]: time="2025-09-13T00:57:38.071557127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:38.071951 env[1325]: time="2025-09-13T00:57:38.071667223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:38.071951 env[1325]: time="2025-09-13T00:57:38.071904603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:38.072711 env[1325]: time="2025-09-13T00:57:38.072440341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/194980b96db16486f4e45a0fc3af09040b851a84ad1cc684eeb79f9491c49c82 pid=2671 runtime=io.containerd.runc.v2 Sep 13 00:57:38.118835 kubelet[2221]: E0913 00:57:38.112333 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.118835 kubelet[2221]: W0913 00:57:38.112364 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.118835 kubelet[2221]: E0913 00:57:38.112396 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.118835 kubelet[2221]: E0913 00:57:38.112841 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.118835 kubelet[2221]: W0913 00:57:38.112861 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.118835 kubelet[2221]: E0913 00:57:38.112892 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:38.118835 kubelet[2221]: E0913 00:57:38.113289 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.118835 kubelet[2221]: W0913 00:57:38.113306 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.118835 kubelet[2221]: E0913 00:57:38.113336 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.120063 kubelet[2221]: E0913 00:57:38.119889 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.120063 kubelet[2221]: W0913 00:57:38.119913 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.120063 kubelet[2221]: E0913 00:57:38.119945 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.120377 kubelet[2221]: E0913 00:57:38.120357 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.120477 kubelet[2221]: W0913 00:57:38.120378 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.120547 kubelet[2221]: E0913 00:57:38.120519 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.120823 kubelet[2221]: E0913 00:57:38.120801 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.120823 kubelet[2221]: W0913 00:57:38.120822 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.120999 kubelet[2221]: E0913 00:57:38.120945 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.121353 kubelet[2221]: E0913 00:57:38.121317 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.121353 kubelet[2221]: W0913 00:57:38.121334 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.121506 kubelet[2221]: E0913 00:57:38.121467 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:38.121797 kubelet[2221]: E0913 00:57:38.121773 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.121797 kubelet[2221]: W0913 00:57:38.121793 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.121975 kubelet[2221]: E0913 00:57:38.121910 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.122179 kubelet[2221]: E0913 00:57:38.122132 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.122179 kubelet[2221]: W0913 00:57:38.122149 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.122337 kubelet[2221]: E0913 00:57:38.122271 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.123087 kubelet[2221]: E0913 00:57:38.122742 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.123087 kubelet[2221]: W0913 00:57:38.122804 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.123087 kubelet[2221]: E0913 00:57:38.122889 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.124124 kubelet[2221]: E0913 00:57:38.123938 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.124124 kubelet[2221]: W0913 00:57:38.123956 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.124124 kubelet[2221]: E0913 00:57:38.124090 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.124352 kubelet[2221]: E0913 00:57:38.124292 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.124352 kubelet[2221]: W0913 00:57:38.124307 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.124548 kubelet[2221]: E0913 00:57:38.124496 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:38.124646 kubelet[2221]: E0913 00:57:38.124596 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.124761 kubelet[2221]: W0913 00:57:38.124726 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.124888 kubelet[2221]: E0913 00:57:38.124864 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.125097 kubelet[2221]: E0913 00:57:38.125071 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.125097 kubelet[2221]: W0913 00:57:38.125091 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.125238 kubelet[2221]: E0913 00:57:38.125208 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.125423 kubelet[2221]: E0913 00:57:38.125402 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.125423 kubelet[2221]: W0913 00:57:38.125423 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.125582 kubelet[2221]: E0913 00:57:38.125547 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.125782 kubelet[2221]: E0913 00:57:38.125754 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.125782 kubelet[2221]: W0913 00:57:38.125774 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.125914 kubelet[2221]: E0913 00:57:38.125889 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.126846 kubelet[2221]: E0913 00:57:38.126208 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.126846 kubelet[2221]: W0913 00:57:38.126229 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.126846 kubelet[2221]: E0913 00:57:38.126347 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:38.126846 kubelet[2221]: E0913 00:57:38.126592 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.126846 kubelet[2221]: W0913 00:57:38.126603 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.126846 kubelet[2221]: E0913 00:57:38.126756 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.127229 kubelet[2221]: E0913 00:57:38.126964 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.127229 kubelet[2221]: W0913 00:57:38.126977 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.127229 kubelet[2221]: E0913 00:57:38.127100 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.127391 kubelet[2221]: E0913 00:57:38.127312 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.127391 kubelet[2221]: W0913 00:57:38.127324 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.127500 kubelet[2221]: E0913 00:57:38.127486 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.127795 kubelet[2221]: E0913 00:57:38.127762 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.127795 kubelet[2221]: W0913 00:57:38.127783 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.127966 kubelet[2221]: E0913 00:57:38.127946 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.128190 kubelet[2221]: E0913 00:57:38.128169 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.128190 kubelet[2221]: W0913 00:57:38.128191 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.128363 kubelet[2221]: E0913 00:57:38.128338 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:38.128588 kubelet[2221]: E0913 00:57:38.128568 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.128588 kubelet[2221]: W0913 00:57:38.128589 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.128769 kubelet[2221]: E0913 00:57:38.128750 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.130739 kubelet[2221]: E0913 00:57:38.130109 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.130739 kubelet[2221]: W0913 00:57:38.130258 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.130739 kubelet[2221]: E0913 00:57:38.130336 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.131508 kubelet[2221]: E0913 00:57:38.131133 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.131508 kubelet[2221]: W0913 00:57:38.131152 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.131508 kubelet[2221]: E0913 00:57:38.131208 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.146859 kubelet[2221]: E0913 00:57:38.146820 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:38.146859 kubelet[2221]: W0913 00:57:38.146853 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:38.147058 kubelet[2221]: E0913 00:57:38.146882 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:38.161424 env[1325]: time="2025-09-13T00:57:38.161364039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bczc8,Uid:b5d9e403-1092-42c3-bb01-86c5b0fc3bdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"194980b96db16486f4e45a0fc3af09040b851a84ad1cc684eeb79f9491c49c82\"" Sep 13 00:57:38.866282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293537041.mount: Deactivated successfully. 
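Note on the repeated driver-call.go errors above: kubelet's FlexVolume prober executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and tries to parse its stdout as JSON. The binary is not on the host yet (the flexvol-driver container started further down is what installs it), so the empty output fails with "unexpected end of JSON input". As an illustration only, and not Calico's actual uds binary, a minimal FlexVolume-style driver would answer the init call roughly like this:

    #!/usr/bin/env python3
    # Illustrative sketch of the FlexVolume call convention kubelet is probing above:
    # the driver is run as "<driver> init" and must print a JSON status to stdout.
    # A missing binary / empty stdout is exactly what produces
    # "unexpected end of JSON input" in driver-call.go.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # "attach": False tells kubelet this driver needs no attach/detach calls.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported", "message": "unsupported operation: " + op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())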
Sep 13 00:57:39.719219 kubelet[2221]: E0913 00:57:39.719160 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjjfb" podUID="6497f54b-f081-4e3e-89dc-fe9b1d7d52c2" Sep 13 00:57:40.050107 env[1325]: time="2025-09-13T00:57:40.049520761Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:40.052579 env[1325]: time="2025-09-13T00:57:40.052462973Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:40.057155 env[1325]: time="2025-09-13T00:57:40.055982740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:40.059602 env[1325]: time="2025-09-13T00:57:40.058584843Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:40.060000 env[1325]: time="2025-09-13T00:57:40.059563113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:57:40.078003 env[1325]: time="2025-09-13T00:57:40.077942050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:57:40.099124 env[1325]: time="2025-09-13T00:57:40.099066278Z" level=info msg="CreateContainer within sandbox \"e22fe93a8a3c8275bb0741a37f86779bb2d92fe0a5c30f793c5e2cb47fca7fd0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:57:40.119946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126477183.mount: Deactivated successfully. 
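The env[1325] entries here are containerd's logfmt-style records (key=value pairs, with quoted values using backslash-escaped quotes); the PullImage line above, for example, reports that the typha v3.30.3 tag resolved to the image ID sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36. A rough parsing sketch for pulling the time/level/msg fields out of such a record (an illustration, not containerd's own tooling):

    # Split one containerd env[...] record into its key=value fields; quoted
    # values may contain \" escapes. Illustration only.
    import re

    FIELD = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

    def parse_containerd_record(record: str) -> dict:
        fields = {}
        for key, quoted, bare in FIELD.findall(record):
            fields[key] = (quoted or bare).replace('\\"', '"')
        return fields

    sample = ('time="2025-09-13T00:57:40.059563113Z" level=info '
              'msg="PullImage \\"ghcr.io/flatcar/calico/typha:v3.30.3\\" returns image reference '
              '\\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\\""')
    print(parse_containerd_record(sample)["msg"])  # prints the unescaped msg value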
Sep 13 00:57:40.125086 env[1325]: time="2025-09-13T00:57:40.125017747Z" level=info msg="CreateContainer within sandbox \"e22fe93a8a3c8275bb0741a37f86779bb2d92fe0a5c30f793c5e2cb47fca7fd0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"31a1ec1688401b3dc2b7988998f21ec617ac7fa0ee48bc8182bfa54f3f514550\"" Sep 13 00:57:40.127688 env[1325]: time="2025-09-13T00:57:40.127572259Z" level=info msg="StartContainer for \"31a1ec1688401b3dc2b7988998f21ec617ac7fa0ee48bc8182bfa54f3f514550\"" Sep 13 00:57:40.244667 env[1325]: time="2025-09-13T00:57:40.244575842Z" level=info msg="StartContainer for \"31a1ec1688401b3dc2b7988998f21ec617ac7fa0ee48bc8182bfa54f3f514550\" returns successfully" Sep 13 00:57:40.905354 kubelet[2221]: I0913 00:57:40.904737 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-796f7d58cf-mcpr8" podStartSLOduration=1.68799432 podStartE2EDuration="3.904709554s" podCreationTimestamp="2025-09-13 00:57:37 +0000 UTC" firstStartedPulling="2025-09-13 00:57:37.846234684 +0000 UTC m=+24.432728684" lastFinishedPulling="2025-09-13 00:57:40.062949924 +0000 UTC m=+26.649443918" observedRunningTime="2025-09-13 00:57:40.904654522 +0000 UTC m=+27.491148541" watchObservedRunningTime="2025-09-13 00:57:40.904709554 +0000 UTC m=+27.491203568" Sep 13 00:57:40.943137 kubelet[2221]: E0913 00:57:40.941871 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.943330 kubelet[2221]: W0913 00:57:40.943145 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.943330 kubelet[2221]: E0913 00:57:40.943218 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.943950 kubelet[2221]: E0913 00:57:40.943924 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.943950 kubelet[2221]: W0913 00:57:40.943949 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.944136 kubelet[2221]: E0913 00:57:40.943999 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.944428 kubelet[2221]: E0913 00:57:40.944404 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.944506 kubelet[2221]: W0913 00:57:40.944429 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.944506 kubelet[2221]: E0913 00:57:40.944450 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:40.944907 kubelet[2221]: E0913 00:57:40.944884 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.945013 kubelet[2221]: W0913 00:57:40.944906 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.945013 kubelet[2221]: E0913 00:57:40.944958 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.945389 kubelet[2221]: E0913 00:57:40.945365 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.945479 kubelet[2221]: W0913 00:57:40.945430 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.945479 kubelet[2221]: E0913 00:57:40.945451 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.945945 kubelet[2221]: E0913 00:57:40.945920 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.945945 kubelet[2221]: W0913 00:57:40.945944 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.946106 kubelet[2221]: E0913 00:57:40.945988 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.946341 kubelet[2221]: E0913 00:57:40.946320 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.946418 kubelet[2221]: W0913 00:57:40.946341 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.946418 kubelet[2221]: E0913 00:57:40.946359 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.946702 kubelet[2221]: E0913 00:57:40.946682 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.946802 kubelet[2221]: W0913 00:57:40.946702 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.946802 kubelet[2221]: E0913 00:57:40.946719 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:40.947033 kubelet[2221]: E0913 00:57:40.947013 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.947105 kubelet[2221]: W0913 00:57:40.947034 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.947105 kubelet[2221]: E0913 00:57:40.947054 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.947336 kubelet[2221]: E0913 00:57:40.947317 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.947412 kubelet[2221]: W0913 00:57:40.947337 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.947412 kubelet[2221]: E0913 00:57:40.947353 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.947703 kubelet[2221]: E0913 00:57:40.947682 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.947703 kubelet[2221]: W0913 00:57:40.947703 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.947863 kubelet[2221]: E0913 00:57:40.947719 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.948015 kubelet[2221]: E0913 00:57:40.947995 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.948080 kubelet[2221]: W0913 00:57:40.948016 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.948080 kubelet[2221]: E0913 00:57:40.948032 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.948355 kubelet[2221]: E0913 00:57:40.948335 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.948434 kubelet[2221]: W0913 00:57:40.948355 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.948434 kubelet[2221]: E0913 00:57:40.948372 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:40.948724 kubelet[2221]: E0913 00:57:40.948703 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.948808 kubelet[2221]: W0913 00:57:40.948724 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.948808 kubelet[2221]: E0913 00:57:40.948742 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.949073 kubelet[2221]: E0913 00:57:40.949052 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.949155 kubelet[2221]: W0913 00:57:40.949074 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.949155 kubelet[2221]: E0913 00:57:40.949092 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.949581 kubelet[2221]: E0913 00:57:40.949555 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.949726 kubelet[2221]: W0913 00:57:40.949676 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.949726 kubelet[2221]: E0913 00:57:40.949700 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.950194 kubelet[2221]: E0913 00:57:40.950172 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.950194 kubelet[2221]: W0913 00:57:40.950194 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.950334 kubelet[2221]: E0913 00:57:40.950217 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.950638 kubelet[2221]: E0913 00:57:40.950597 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.950744 kubelet[2221]: W0913 00:57:40.950646 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.950744 kubelet[2221]: E0913 00:57:40.950668 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:40.951068 kubelet[2221]: E0913 00:57:40.951045 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.951068 kubelet[2221]: W0913 00:57:40.951067 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.951208 kubelet[2221]: E0913 00:57:40.951090 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.951502 kubelet[2221]: E0913 00:57:40.951474 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.951502 kubelet[2221]: W0913 00:57:40.951494 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.951752 kubelet[2221]: E0913 00:57:40.951699 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.952135 kubelet[2221]: E0913 00:57:40.952114 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.952235 kubelet[2221]: W0913 00:57:40.952135 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.952235 kubelet[2221]: E0913 00:57:40.952209 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.952589 kubelet[2221]: E0913 00:57:40.952570 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.952706 kubelet[2221]: W0913 00:57:40.952590 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.952770 kubelet[2221]: E0913 00:57:40.952755 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.952973 kubelet[2221]: E0913 00:57:40.952953 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.953043 kubelet[2221]: W0913 00:57:40.952974 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.953043 kubelet[2221]: E0913 00:57:40.952996 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:40.953373 kubelet[2221]: E0913 00:57:40.953352 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.953451 kubelet[2221]: W0913 00:57:40.953374 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.953451 kubelet[2221]: E0913 00:57:40.953396 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.956037 kubelet[2221]: E0913 00:57:40.954816 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.956037 kubelet[2221]: W0913 00:57:40.954837 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.956037 kubelet[2221]: E0913 00:57:40.954855 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.956037 kubelet[2221]: E0913 00:57:40.955499 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.956037 kubelet[2221]: W0913 00:57:40.955518 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.956037 kubelet[2221]: E0913 00:57:40.955538 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.957473 kubelet[2221]: E0913 00:57:40.957443 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.957473 kubelet[2221]: W0913 00:57:40.957470 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.957746 kubelet[2221]: E0913 00:57:40.957491 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.958830 kubelet[2221]: E0913 00:57:40.958804 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.958830 kubelet[2221]: W0913 00:57:40.958829 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.958973 kubelet[2221]: E0913 00:57:40.958852 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:40.959397 kubelet[2221]: E0913 00:57:40.959373 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.959484 kubelet[2221]: W0913 00:57:40.959397 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.959484 kubelet[2221]: E0913 00:57:40.959432 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.961637 kubelet[2221]: E0913 00:57:40.961600 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.961750 kubelet[2221]: W0913 00:57:40.961653 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.961750 kubelet[2221]: E0913 00:57:40.961674 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.961975 kubelet[2221]: E0913 00:57:40.961953 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.962047 kubelet[2221]: W0913 00:57:40.961983 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.962047 kubelet[2221]: E0913 00:57:40.962001 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.962398 kubelet[2221]: E0913 00:57:40.962366 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.962485 kubelet[2221]: W0913 00:57:40.962449 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.962485 kubelet[2221]: E0913 00:57:40.962470 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:57:40.965921 kubelet[2221]: E0913 00:57:40.965891 2221 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:57:40.965921 kubelet[2221]: W0913 00:57:40.965919 2221 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:57:40.966094 kubelet[2221]: E0913 00:57:40.965940 2221 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:57:41.079544 env[1325]: time="2025-09-13T00:57:41.079490710Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:41.084184 env[1325]: time="2025-09-13T00:57:41.084132571Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:41.090058 env[1325]: time="2025-09-13T00:57:41.089998933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:41.098752 env[1325]: time="2025-09-13T00:57:41.098696164Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:41.101032 env[1325]: time="2025-09-13T00:57:41.100987788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:57:41.106026 env[1325]: time="2025-09-13T00:57:41.105971479Z" level=info msg="CreateContainer within sandbox \"194980b96db16486f4e45a0fc3af09040b851a84ad1cc684eeb79f9491c49c82\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:57:41.128257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007206942.mount: Deactivated successfully. 
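The pod_startup_latency_tracker record above (podStartSLOduration=1.68799432, podStartE2EDuration=3.904709554s for calico-typha-796f7d58cf-mcpr8) is internally consistent: the end-to-end figure is observedRunningTime minus podCreationTimestamp, and the SLO figure appears to subtract the image-pull window measured on the monotonic m=+... stamps in the same record. A quick check of that arithmetic (assumed reconstruction, not kubelet source):

    # Reconstructing the "Observed pod startup duration" figures from the
    # timestamps in the record above (assumed arithmetic, not kubelet code).
    pod_created   = 37.000000000   # podCreationTimestamp 00:57:37, seconds into the minute
    running_seen  = 40.904709554   # observedRunningTime  00:57:40.904709554
    pull_started  = 24.432728684   # firstStartedPulling, monotonic offset m=+24.432728684
    pull_finished = 26.649443918   # lastFinishedPulling, monotonic offset m=+26.649443918

    e2e = running_seen - pod_created                # 3.904709554  == podStartE2EDuration
    slo = e2e - (pull_finished - pull_started)      # 1.68799432   == podStartSLOduration
    print(f"e2e={e2e:.9f}s  slo={slo:.9f}s")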
Sep 13 00:57:41.131831 env[1325]: time="2025-09-13T00:57:41.131756659Z" level=info msg="CreateContainer within sandbox \"194980b96db16486f4e45a0fc3af09040b851a84ad1cc684eeb79f9491c49c82\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"91a5c99ffbcc42e58af526c5f6ec88137adf1797d77340dd77a9695bcc375eab\"" Sep 13 00:57:41.136589 env[1325]: time="2025-09-13T00:57:41.136537411Z" level=info msg="StartContainer for \"91a5c99ffbcc42e58af526c5f6ec88137adf1797d77340dd77a9695bcc375eab\"" Sep 13 00:57:41.251750 env[1325]: time="2025-09-13T00:57:41.251566670Z" level=info msg="StartContainer for \"91a5c99ffbcc42e58af526c5f6ec88137adf1797d77340dd77a9695bcc375eab\" returns successfully" Sep 13 00:57:41.718838 kubelet[2221]: E0913 00:57:41.718769 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjjfb" podUID="6497f54b-f081-4e3e-89dc-fe9b1d7d52c2" Sep 13 00:57:41.891576 kubelet[2221]: I0913 00:57:41.891522 2221 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:57:42.028321 env[1325]: time="2025-09-13T00:57:42.028141544Z" level=info msg="shim disconnected" id=91a5c99ffbcc42e58af526c5f6ec88137adf1797d77340dd77a9695bcc375eab Sep 13 00:57:42.028321 env[1325]: time="2025-09-13T00:57:42.028206406Z" level=warning msg="cleaning up after shim disconnected" id=91a5c99ffbcc42e58af526c5f6ec88137adf1797d77340dd77a9695bcc375eab namespace=k8s.io Sep 13 00:57:42.028321 env[1325]: time="2025-09-13T00:57:42.028222806Z" level=info msg="cleaning up dead shim" Sep 13 00:57:42.045159 env[1325]: time="2025-09-13T00:57:42.045092507Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2866 runtime=io.containerd.runc.v2\n" Sep 13 00:57:42.077845 systemd[1]: run-containerd-runc-k8s.io-91a5c99ffbcc42e58af526c5f6ec88137adf1797d77340dd77a9695bcc375eab-runc.3TNGfi.mount: Deactivated successfully. Sep 13 00:57:42.078087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91a5c99ffbcc42e58af526c5f6ec88137adf1797d77340dd77a9695bcc375eab-rootfs.mount: Deactivated successfully. 
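The systemd *.mount unit names in the "Deactivated successfully" lines encode mount-point paths: "-" separates path components and literal characters are hex-escaped as \xNN, so var-lib-containerd-tmpmounts-containerd\x2dmount2293537041.mount corresponds to /var/lib/containerd/tmpmounts/containerd-mount2293537041. A small decoding sketch (illustrative; systemd-escape -u -p does the same job):

    # Turn a systemd *.mount unit name back into its mount-point path:
    # "-" separates components, \xNN is a hex-escaped literal byte.
    import re

    def mount_unit_to_path(unit: str) -> str:
        name = unit.removesuffix(".mount")
        unescape = lambda part: re.sub(r"\\x([0-9a-fA-F]{2})",
                                       lambda m: chr(int(m.group(1), 16)), part)
        return "/" + "/".join(unescape(p) for p in name.split("-"))

    print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2293537041.mount"))
    # /var/lib/containerd/tmpmounts/containerd-mount2293537041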
Sep 13 00:57:42.897603 env[1325]: time="2025-09-13T00:57:42.897495276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:57:43.718754 kubelet[2221]: E0913 00:57:43.718488 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjjfb" podUID="6497f54b-f081-4e3e-89dc-fe9b1d7d52c2" Sep 13 00:57:45.721648 kubelet[2221]: E0913 00:57:45.721309 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjjfb" podUID="6497f54b-f081-4e3e-89dc-fe9b1d7d52c2" Sep 13 00:57:46.351815 env[1325]: time="2025-09-13T00:57:46.351744786Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:46.354121 env[1325]: time="2025-09-13T00:57:46.354042794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:46.359724 env[1325]: time="2025-09-13T00:57:46.359681504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:46.362243 env[1325]: time="2025-09-13T00:57:46.362202196Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:46.362808 env[1325]: time="2025-09-13T00:57:46.362765549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:57:46.367662 env[1325]: time="2025-09-13T00:57:46.367590588Z" level=info msg="CreateContainer within sandbox \"194980b96db16486f4e45a0fc3af09040b851a84ad1cc684eeb79f9491c49c82\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:57:46.389238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182968130.mount: Deactivated successfully. 
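The recurring "cni plugin not initialized" condition in these entries persists until the install-cni container (created just above from the calico/cni image) drops a network configuration into /etc/cni/net.d; until then the runtime reports "no network config found in /etc/cni/net.d", as the reload error later in the log also shows. A rough illustration of that readiness check (an assumption for clarity, not containerd's actual loader):

    # Rough illustration of the condition behind "cni plugin not initialized":
    # the CNI stays uninitialized until a *.conf/*.conflist/*.json file appears
    # in /etc/cni/net.d, which is install-cni's job.
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")

    def cni_configs(conf_dir: Path = CNI_CONF_DIR) -> list[Path]:
        if not conf_dir.is_dir():
            return []
        return sorted(p for p in conf_dir.iterdir()
                      if p.suffix in {".conf", ".conflist", ".json"})

    if __name__ == "__main__":
        found = cni_configs()
        if not found:
            print(f"no network config found in {CNI_CONF_DIR}")
        else:
            for path in found:
                print(f"network config: {path.name}")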
Sep 13 00:57:46.396466 env[1325]: time="2025-09-13T00:57:46.396389256Z" level=info msg="CreateContainer within sandbox \"194980b96db16486f4e45a0fc3af09040b851a84ad1cc684eeb79f9491c49c82\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dab81dd746ffe361d353da9aed8cacdd9bc35b1b1d9c04b5bddb66c8516d68ab\"" Sep 13 00:57:46.399140 env[1325]: time="2025-09-13T00:57:46.397142809Z" level=info msg="StartContainer for \"dab81dd746ffe361d353da9aed8cacdd9bc35b1b1d9c04b5bddb66c8516d68ab\"" Sep 13 00:57:46.492178 env[1325]: time="2025-09-13T00:57:46.492101728Z" level=info msg="StartContainer for \"dab81dd746ffe361d353da9aed8cacdd9bc35b1b1d9c04b5bddb66c8516d68ab\" returns successfully" Sep 13 00:57:47.273193 kubelet[2221]: I0913 00:57:47.273137 2221 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:57:47.316000 audit[2913]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:47.338675 kernel: kauditd_printk_skb: 8 callbacks suppressed Sep 13 00:57:47.338805 kernel: audit: type=1325 audit(1757725067.316:280): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:47.338854 kernel: audit: type=1300 audit(1757725067.316:280): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffabcd17b0 a2=0 a3=7fffabcd179c items=0 ppid=2341 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:47.316000 audit[2913]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffabcd17b0 a2=0 a3=7fffabcd179c items=0 ppid=2341 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:47.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:47.388643 kernel: audit: type=1327 audit(1757725067.316:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:47.372000 audit[2913]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:47.405638 kernel: audit: type=1325 audit(1757725067.372:281): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:57:47.372000 audit[2913]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffabcd17b0 a2=0 a3=7fffabcd179c items=0 ppid=2341 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:47.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:47.455364 kernel: audit: type=1300 audit(1757725067.372:281): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffabcd17b0 a2=0 a3=7fffabcd179c items=0 ppid=2341 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:47.455542 kernel: audit: type=1327 audit(1757725067.372:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:57:47.650111 env[1325]: time="2025-09-13T00:57:47.649973405Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:57:47.679738 kubelet[2221]: I0913 00:57:47.679190 2221 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:57:47.734272 env[1325]: time="2025-09-13T00:57:47.725349506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjjfb,Uid:6497f54b-f081-4e3e-89dc-fe9b1d7d52c2,Namespace:calico-system,Attempt:0,}" Sep 13 00:57:47.751224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dab81dd746ffe361d353da9aed8cacdd9bc35b1b1d9c04b5bddb66c8516d68ab-rootfs.mount: Deactivated successfully. Sep 13 00:57:47.811475 kubelet[2221]: I0913 00:57:47.808921 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q287\" (UniqueName: \"kubernetes.io/projected/02a91fdd-1f7d-4977-ad95-07ea1dc01154-kube-api-access-6q287\") pod \"coredns-7c65d6cfc9-x5h4h\" (UID: \"02a91fdd-1f7d-4977-ad95-07ea1dc01154\") " pod="kube-system/coredns-7c65d6cfc9-x5h4h" Sep 13 00:57:47.811475 kubelet[2221]: I0913 00:57:47.809001 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pdg\" (UniqueName: \"kubernetes.io/projected/c749baa2-250c-406e-806c-5781eafb74e7-kube-api-access-d4pdg\") pod \"calico-apiserver-649475f784-nsdkw\" (UID: \"c749baa2-250c-406e-806c-5781eafb74e7\") " pod="calico-apiserver/calico-apiserver-649475f784-nsdkw" Sep 13 00:57:47.811475 kubelet[2221]: I0913 00:57:47.809055 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02a91fdd-1f7d-4977-ad95-07ea1dc01154-config-volume\") pod \"coredns-7c65d6cfc9-x5h4h\" (UID: \"02a91fdd-1f7d-4977-ad95-07ea1dc01154\") " pod="kube-system/coredns-7c65d6cfc9-x5h4h" Sep 13 00:57:47.811475 kubelet[2221]: I0913 00:57:47.809091 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6z8v\" (UniqueName: \"kubernetes.io/projected/820afca3-f77c-4bac-b219-c18864653831-kube-api-access-f6z8v\") pod \"coredns-7c65d6cfc9-28jg5\" (UID: \"820afca3-f77c-4bac-b219-c18864653831\") " pod="kube-system/coredns-7c65d6cfc9-28jg5" Sep 13 00:57:47.811475 kubelet[2221]: I0913 00:57:47.809136 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgqwc\" (UniqueName: \"kubernetes.io/projected/7a8637c4-413d-4e61-bef0-740ff2360374-kube-api-access-cgqwc\") pod \"goldmane-7988f88666-mqrbm\" (UID: \"7a8637c4-413d-4e61-bef0-740ff2360374\") " pod="calico-system/goldmane-7988f88666-mqrbm" Sep 13 00:57:47.812116 kubelet[2221]: I0913 00:57:47.809168 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvk4\" (UniqueName: 
\"kubernetes.io/projected/5482a9ae-642d-42d0-b694-214ca0591875-kube-api-access-rvvk4\") pod \"calico-apiserver-649475f784-qqkdv\" (UID: \"5482a9ae-642d-42d0-b694-214ca0591875\") " pod="calico-apiserver/calico-apiserver-649475f784-qqkdv" Sep 13 00:57:47.812116 kubelet[2221]: I0913 00:57:47.809221 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8637c4-413d-4e61-bef0-740ff2360374-config\") pod \"goldmane-7988f88666-mqrbm\" (UID: \"7a8637c4-413d-4e61-bef0-740ff2360374\") " pod="calico-system/goldmane-7988f88666-mqrbm" Sep 13 00:57:47.812116 kubelet[2221]: I0913 00:57:47.809249 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a8637c4-413d-4e61-bef0-740ff2360374-goldmane-ca-bundle\") pod \"goldmane-7988f88666-mqrbm\" (UID: \"7a8637c4-413d-4e61-bef0-740ff2360374\") " pod="calico-system/goldmane-7988f88666-mqrbm" Sep 13 00:57:47.812116 kubelet[2221]: I0913 00:57:47.809298 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7a8637c4-413d-4e61-bef0-740ff2360374-goldmane-key-pair\") pod \"goldmane-7988f88666-mqrbm\" (UID: \"7a8637c4-413d-4e61-bef0-740ff2360374\") " pod="calico-system/goldmane-7988f88666-mqrbm" Sep 13 00:57:47.812116 kubelet[2221]: I0913 00:57:47.809330 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eca60e6b-177b-4588-8e7f-a2dc081264e1-tigera-ca-bundle\") pod \"calico-kube-controllers-b845c7695-sr7sp\" (UID: \"eca60e6b-177b-4588-8e7f-a2dc081264e1\") " pod="calico-system/calico-kube-controllers-b845c7695-sr7sp" Sep 13 00:57:47.812317 kubelet[2221]: I0913 00:57:47.809377 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/820afca3-f77c-4bac-b219-c18864653831-config-volume\") pod \"coredns-7c65d6cfc9-28jg5\" (UID: \"820afca3-f77c-4bac-b219-c18864653831\") " pod="kube-system/coredns-7c65d6cfc9-28jg5" Sep 13 00:57:47.812317 kubelet[2221]: I0913 00:57:47.809447 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c749baa2-250c-406e-806c-5781eafb74e7-calico-apiserver-certs\") pod \"calico-apiserver-649475f784-nsdkw\" (UID: \"c749baa2-250c-406e-806c-5781eafb74e7\") " pod="calico-apiserver/calico-apiserver-649475f784-nsdkw" Sep 13 00:57:47.812317 kubelet[2221]: I0913 00:57:47.809480 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqb5\" (UniqueName: \"kubernetes.io/projected/61c908e5-28ce-465e-af5e-98054050fb03-kube-api-access-mvqb5\") pod \"whisker-bd8b687f-9jrzd\" (UID: \"61c908e5-28ce-465e-af5e-98054050fb03\") " pod="calico-system/whisker-bd8b687f-9jrzd" Sep 13 00:57:47.812317 kubelet[2221]: I0913 00:57:47.809532 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94tk2\" (UniqueName: \"kubernetes.io/projected/eca60e6b-177b-4588-8e7f-a2dc081264e1-kube-api-access-94tk2\") pod \"calico-kube-controllers-b845c7695-sr7sp\" (UID: \"eca60e6b-177b-4588-8e7f-a2dc081264e1\") " 
pod="calico-system/calico-kube-controllers-b845c7695-sr7sp" Sep 13 00:57:47.812317 kubelet[2221]: I0913 00:57:47.809564 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5482a9ae-642d-42d0-b694-214ca0591875-calico-apiserver-certs\") pod \"calico-apiserver-649475f784-qqkdv\" (UID: \"5482a9ae-642d-42d0-b694-214ca0591875\") " pod="calico-apiserver/calico-apiserver-649475f784-qqkdv" Sep 13 00:57:47.812485 kubelet[2221]: I0913 00:57:47.809623 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61c908e5-28ce-465e-af5e-98054050fb03-whisker-backend-key-pair\") pod \"whisker-bd8b687f-9jrzd\" (UID: \"61c908e5-28ce-465e-af5e-98054050fb03\") " pod="calico-system/whisker-bd8b687f-9jrzd" Sep 13 00:57:47.812485 kubelet[2221]: I0913 00:57:47.809655 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61c908e5-28ce-465e-af5e-98054050fb03-whisker-ca-bundle\") pod \"whisker-bd8b687f-9jrzd\" (UID: \"61c908e5-28ce-465e-af5e-98054050fb03\") " pod="calico-system/whisker-bd8b687f-9jrzd" Sep 13 00:57:48.072784 env[1325]: time="2025-09-13T00:57:48.072405100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-mqrbm,Uid:7a8637c4-413d-4e61-bef0-740ff2360374,Namespace:calico-system,Attempt:0,}" Sep 13 00:57:48.080658 env[1325]: time="2025-09-13T00:57:48.080465662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-28jg5,Uid:820afca3-f77c-4bac-b219-c18864653831,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:48.082662 env[1325]: time="2025-09-13T00:57:48.082580616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649475f784-nsdkw,Uid:c749baa2-250c-406e-806c-5781eafb74e7,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:57:48.089653 env[1325]: time="2025-09-13T00:57:48.089348634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x5h4h,Uid:02a91fdd-1f7d-4977-ad95-07ea1dc01154,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:48.090644 env[1325]: time="2025-09-13T00:57:48.090558189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b845c7695-sr7sp,Uid:eca60e6b-177b-4588-8e7f-a2dc081264e1,Namespace:calico-system,Attempt:0,}" Sep 13 00:57:48.091153 env[1325]: time="2025-09-13T00:57:48.091112326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd8b687f-9jrzd,Uid:61c908e5-28ce-465e-af5e-98054050fb03,Namespace:calico-system,Attempt:0,}" Sep 13 00:57:48.092068 env[1325]: time="2025-09-13T00:57:48.091999350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649475f784-qqkdv,Uid:5482a9ae-642d-42d0-b694-214ca0591875,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:57:48.316522 env[1325]: time="2025-09-13T00:57:48.316457493Z" level=info msg="shim disconnected" id=dab81dd746ffe361d353da9aed8cacdd9bc35b1b1d9c04b5bddb66c8516d68ab Sep 13 00:57:48.316522 env[1325]: time="2025-09-13T00:57:48.316524104Z" level=warning msg="cleaning up after shim disconnected" id=dab81dd746ffe361d353da9aed8cacdd9bc35b1b1d9c04b5bddb66c8516d68ab namespace=k8s.io Sep 13 00:57:48.316933 env[1325]: time="2025-09-13T00:57:48.316540659Z" level=info msg="cleaning up dead shim" Sep 13 00:57:48.333158 env[1325]: time="2025-09-13T00:57:48.333024212Z" 
level=warning msg="cleanup warnings time=\"2025-09-13T00:57:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2953 runtime=io.containerd.runc.v2\n" Sep 13 00:57:48.704745 env[1325]: time="2025-09-13T00:57:48.704629336Z" level=error msg="Failed to destroy network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.705401 env[1325]: time="2025-09-13T00:57:48.705200067Z" level=error msg="encountered an error cleaning up failed sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.705401 env[1325]: time="2025-09-13T00:57:48.705294844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-28jg5,Uid:820afca3-f77c-4bac-b219-c18864653831,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.706570 kubelet[2221]: E0913 00:57:48.705816 2221 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.706570 kubelet[2221]: E0913 00:57:48.705966 2221 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-28jg5" Sep 13 00:57:48.706570 kubelet[2221]: E0913 00:57:48.706029 2221 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-28jg5" Sep 13 00:57:48.707291 kubelet[2221]: E0913 00:57:48.706115 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-28jg5_kube-system(820afca3-f77c-4bac-b219-c18864653831)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-28jg5_kube-system(820afca3-f77c-4bac-b219-c18864653831)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-28jg5" podUID="820afca3-f77c-4bac-b219-c18864653831" Sep 13 00:57:48.807885 env[1325]: time="2025-09-13T00:57:48.807805298Z" level=error msg="Failed to destroy network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.809021 env[1325]: time="2025-09-13T00:57:48.808584490Z" level=error msg="encountered an error cleaning up failed sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.809021 env[1325]: time="2025-09-13T00:57:48.808800197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjjfb,Uid:6497f54b-f081-4e3e-89dc-fe9b1d7d52c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.809021 env[1325]: time="2025-09-13T00:57:48.808117372Z" level=error msg="Failed to destroy network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.812971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45-shm.mount: Deactivated successfully. Sep 13 00:57:48.813224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66-shm.mount: Deactivated successfully. 
Sep 13 00:57:48.820032 kubelet[2221]: E0913 00:57:48.819264 2221 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.820032 kubelet[2221]: E0913 00:57:48.819421 2221 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bjjfb" Sep 13 00:57:48.820032 kubelet[2221]: E0913 00:57:48.819474 2221 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bjjfb" Sep 13 00:57:48.823739 env[1325]: time="2025-09-13T00:57:48.819502263Z" level=error msg="encountered an error cleaning up failed sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.823739 env[1325]: time="2025-09-13T00:57:48.819596009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649475f784-nsdkw,Uid:c749baa2-250c-406e-806c-5781eafb74e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.823948 kubelet[2221]: E0913 00:57:48.819569 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bjjfb_calico-system(6497f54b-f081-4e3e-89dc-fe9b1d7d52c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bjjfb_calico-system(6497f54b-f081-4e3e-89dc-fe9b1d7d52c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bjjfb" podUID="6497f54b-f081-4e3e-89dc-fe9b1d7d52c2" Sep 13 00:57:48.825634 kubelet[2221]: E0913 00:57:48.825287 2221 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.825634 kubelet[2221]: E0913 00:57:48.825383 2221 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649475f784-nsdkw" Sep 13 00:57:48.825634 kubelet[2221]: E0913 00:57:48.825432 2221 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649475f784-nsdkw" Sep 13 00:57:48.825894 kubelet[2221]: E0913 00:57:48.825523 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-649475f784-nsdkw_calico-apiserver(c749baa2-250c-406e-806c-5781eafb74e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-649475f784-nsdkw_calico-apiserver(c749baa2-250c-406e-806c-5781eafb74e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649475f784-nsdkw" podUID="c749baa2-250c-406e-806c-5781eafb74e7" Sep 13 00:57:48.855132 env[1325]: time="2025-09-13T00:57:48.855046876Z" level=error msg="Failed to destroy network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.860578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f-shm.mount: Deactivated successfully. 
Sep 13 00:57:48.867698 env[1325]: time="2025-09-13T00:57:48.867625567Z" level=error msg="encountered an error cleaning up failed sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.867987 env[1325]: time="2025-09-13T00:57:48.867930026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-mqrbm,Uid:7a8637c4-413d-4e61-bef0-740ff2360374,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.869121 kubelet[2221]: E0913 00:57:48.868463 2221 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.869121 kubelet[2221]: E0913 00:57:48.868559 2221 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-mqrbm" Sep 13 00:57:48.869121 kubelet[2221]: E0913 00:57:48.868592 2221 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-mqrbm" Sep 13 00:57:48.870942 kubelet[2221]: E0913 00:57:48.868688 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-mqrbm_calico-system(7a8637c4-413d-4e61-bef0-740ff2360374)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-mqrbm_calico-system(7a8637c4-413d-4e61-bef0-740ff2360374)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-mqrbm" podUID="7a8637c4-413d-4e61-bef0-740ff2360374" Sep 13 00:57:48.908388 env[1325]: time="2025-09-13T00:57:48.908313925Z" level=error msg="Failed to destroy network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 13 00:57:48.913333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce-shm.mount: Deactivated successfully. Sep 13 00:57:48.916294 env[1325]: time="2025-09-13T00:57:48.916226911Z" level=error msg="encountered an error cleaning up failed sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.916564 env[1325]: time="2025-09-13T00:57:48.916507059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649475f784-qqkdv,Uid:5482a9ae-642d-42d0-b694-214ca0591875,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.917771 kubelet[2221]: E0913 00:57:48.917110 2221 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.917771 kubelet[2221]: E0913 00:57:48.917198 2221 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649475f784-qqkdv" Sep 13 00:57:48.917771 kubelet[2221]: E0913 00:57:48.917263 2221 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649475f784-qqkdv" Sep 13 00:57:48.918043 kubelet[2221]: E0913 00:57:48.917351 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-649475f784-qqkdv_calico-apiserver(5482a9ae-642d-42d0-b694-214ca0591875)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-649475f784-qqkdv_calico-apiserver(5482a9ae-642d-42d0-b694-214ca0591875)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649475f784-qqkdv" podUID="5482a9ae-642d-42d0-b694-214ca0591875" Sep 13 00:57:48.930956 env[1325]: 
time="2025-09-13T00:57:48.930861986Z" level=error msg="Failed to destroy network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.932266 env[1325]: time="2025-09-13T00:57:48.932207185Z" level=error msg="encountered an error cleaning up failed sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.932513 env[1325]: time="2025-09-13T00:57:48.932457447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x5h4h,Uid:02a91fdd-1f7d-4977-ad95-07ea1dc01154,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.933599 kubelet[2221]: E0913 00:57:48.932978 2221 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:48.933599 kubelet[2221]: E0913 00:57:48.933073 2221 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-x5h4h" Sep 13 00:57:48.933599 kubelet[2221]: E0913 00:57:48.933124 2221 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-x5h4h" Sep 13 00:57:48.933889 kubelet[2221]: E0913 00:57:48.933218 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-x5h4h_kube-system(02a91fdd-1f7d-4977-ad95-07ea1dc01154)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-x5h4h_kube-system(02a91fdd-1f7d-4977-ad95-07ea1dc01154)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-x5h4h" podUID="02a91fdd-1f7d-4977-ad95-07ea1dc01154" Sep 13 
00:57:48.960802 env[1325]: time="2025-09-13T00:57:48.955328773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:57:48.960997 kubelet[2221]: I0913 00:57:48.955980 2221 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:57:48.966481 env[1325]: time="2025-09-13T00:57:48.959934440Z" level=info msg="StopPodSandbox for \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\"" Sep 13 00:57:48.981857 kubelet[2221]: I0913 00:57:48.978258 2221 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:57:48.982078 env[1325]: time="2025-09-13T00:57:48.979665398Z" level=info msg="StopPodSandbox for \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\"" Sep 13 00:57:48.984795 kubelet[2221]: I0913 00:57:48.983977 2221 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:57:48.985554 env[1325]: time="2025-09-13T00:57:48.985512132Z" level=info msg="StopPodSandbox for \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\"" Sep 13 00:57:48.990838 kubelet[2221]: I0913 00:57:48.989852 2221 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:57:48.991298 env[1325]: time="2025-09-13T00:57:48.991246556Z" level=info msg="StopPodSandbox for \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\"" Sep 13 00:57:48.994700 kubelet[2221]: I0913 00:57:48.993543 2221 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:57:48.995170 env[1325]: time="2025-09-13T00:57:48.995108263Z" level=info msg="StopPodSandbox for \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\"" Sep 13 00:57:48.998172 kubelet[2221]: I0913 00:57:48.997355 2221 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:57:48.998538 env[1325]: time="2025-09-13T00:57:48.998497883Z" level=info msg="StopPodSandbox for \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\"" Sep 13 00:57:49.009127 env[1325]: time="2025-09-13T00:57:49.009054683Z" level=error msg="Failed to destroy network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.009986 env[1325]: time="2025-09-13T00:57:49.009926424Z" level=error msg="encountered an error cleaning up failed sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.010284 env[1325]: time="2025-09-13T00:57:49.010232951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b845c7695-sr7sp,Uid:eca60e6b-177b-4588-8e7f-a2dc081264e1,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.013321 kubelet[2221]: E0913 00:57:49.010814 2221 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.013321 kubelet[2221]: E0913 00:57:49.010907 2221 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b845c7695-sr7sp" Sep 13 00:57:49.013321 kubelet[2221]: E0913 00:57:49.010968 2221 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b845c7695-sr7sp" Sep 13 00:57:49.013767 env[1325]: time="2025-09-13T00:57:49.011771403Z" level=error msg="Failed to destroy network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.013767 env[1325]: time="2025-09-13T00:57:49.012434481Z" level=error msg="encountered an error cleaning up failed sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.013767 env[1325]: time="2025-09-13T00:57:49.012528041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd8b687f-9jrzd,Uid:61c908e5-28ce-465e-af5e-98054050fb03,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.013968 kubelet[2221]: E0913 00:57:49.011043 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b845c7695-sr7sp_calico-system(eca60e6b-177b-4588-8e7f-a2dc081264e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b845c7695-sr7sp_calico-system(eca60e6b-177b-4588-8e7f-a2dc081264e1)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b845c7695-sr7sp" podUID="eca60e6b-177b-4588-8e7f-a2dc081264e1" Sep 13 00:57:49.022053 kubelet[2221]: E0913 00:57:49.019722 2221 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.022053 kubelet[2221]: E0913 00:57:49.019806 2221 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd8b687f-9jrzd" Sep 13 00:57:49.022053 kubelet[2221]: E0913 00:57:49.019860 2221 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd8b687f-9jrzd" Sep 13 00:57:49.022334 kubelet[2221]: E0913 00:57:49.019985 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd8b687f-9jrzd_calico-system(61c908e5-28ce-465e-af5e-98054050fb03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd8b687f-9jrzd_calico-system(61c908e5-28ce-465e-af5e-98054050fb03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd8b687f-9jrzd" podUID="61c908e5-28ce-465e-af5e-98054050fb03" Sep 13 00:57:49.156765 env[1325]: time="2025-09-13T00:57:49.156685366Z" level=error msg="StopPodSandbox for \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\" failed" error="failed to destroy network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.157634 kubelet[2221]: E0913 00:57:49.157328 2221 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:57:49.157634 kubelet[2221]: E0913 00:57:49.157408 2221 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12"} Sep 13 00:57:49.157634 kubelet[2221]: E0913 00:57:49.157491 2221 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02a91fdd-1f7d-4977-ad95-07ea1dc01154\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:57:49.157634 kubelet[2221]: E0913 00:57:49.157545 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02a91fdd-1f7d-4977-ad95-07ea1dc01154\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-x5h4h" podUID="02a91fdd-1f7d-4977-ad95-07ea1dc01154" Sep 13 00:57:49.162474 env[1325]: time="2025-09-13T00:57:49.162403745Z" level=error msg="StopPodSandbox for \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\" failed" error="failed to destroy network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.163213 kubelet[2221]: E0913 00:57:49.162969 2221 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:57:49.163213 kubelet[2221]: E0913 00:57:49.163038 2221 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45"} Sep 13 00:57:49.163213 kubelet[2221]: E0913 00:57:49.163103 2221 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c749baa2-250c-406e-806c-5781eafb74e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:57:49.163213 kubelet[2221]: E0913 00:57:49.163141 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c749baa2-250c-406e-806c-5781eafb74e7\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649475f784-nsdkw" podUID="c749baa2-250c-406e-806c-5781eafb74e7" Sep 13 00:57:49.184775 env[1325]: time="2025-09-13T00:57:49.184696749Z" level=error msg="StopPodSandbox for \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\" failed" error="failed to destroy network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.185545 kubelet[2221]: E0913 00:57:49.185327 2221 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:57:49.185545 kubelet[2221]: E0913 00:57:49.185393 2221 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d"} Sep 13 00:57:49.185545 kubelet[2221]: E0913 00:57:49.185441 2221 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"820afca3-f77c-4bac-b219-c18864653831\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:57:49.185545 kubelet[2221]: E0913 00:57:49.185476 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"820afca3-f77c-4bac-b219-c18864653831\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-28jg5" podUID="820afca3-f77c-4bac-b219-c18864653831" Sep 13 00:57:49.191038 env[1325]: time="2025-09-13T00:57:49.190961181Z" level=error msg="StopPodSandbox for \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\" failed" error="failed to destroy network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.191433 kubelet[2221]: E0913 00:57:49.191362 2221 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:57:49.191552 kubelet[2221]: E0913 00:57:49.191456 2221 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66"} Sep 13 00:57:49.191552 kubelet[2221]: E0913 00:57:49.191528 2221 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:57:49.191761 kubelet[2221]: E0913 00:57:49.191563 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bjjfb" podUID="6497f54b-f081-4e3e-89dc-fe9b1d7d52c2" Sep 13 00:57:49.199917 env[1325]: time="2025-09-13T00:57:49.199827058Z" level=error msg="StopPodSandbox for \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\" failed" error="failed to destroy network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.200315 kubelet[2221]: E0913 00:57:49.200195 2221 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:57:49.200315 kubelet[2221]: E0913 00:57:49.200265 2221 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce"} Sep 13 00:57:49.200527 kubelet[2221]: E0913 00:57:49.200323 2221 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5482a9ae-642d-42d0-b694-214ca0591875\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 13 00:57:49.200527 kubelet[2221]: E0913 00:57:49.200360 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5482a9ae-642d-42d0-b694-214ca0591875\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649475f784-qqkdv" podUID="5482a9ae-642d-42d0-b694-214ca0591875" Sep 13 00:57:49.201646 env[1325]: time="2025-09-13T00:57:49.201560302Z" level=error msg="StopPodSandbox for \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\" failed" error="failed to destroy network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:49.202091 kubelet[2221]: E0913 00:57:49.202022 2221 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:57:49.202091 kubelet[2221]: E0913 00:57:49.202087 2221 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f"} Sep 13 00:57:49.202275 kubelet[2221]: E0913 00:57:49.202145 2221 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a8637c4-413d-4e61-bef0-740ff2360374\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:57:49.202275 kubelet[2221]: E0913 00:57:49.202182 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a8637c4-413d-4e61-bef0-740ff2360374\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-mqrbm" podUID="7a8637c4-413d-4e61-bef0-740ff2360374" Sep 13 00:57:49.758464 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30-shm.mount: Deactivated successfully. Sep 13 00:57:49.758739 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87-shm.mount: Deactivated successfully. 
Sep 13 00:57:49.758916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12-shm.mount: Deactivated successfully. Sep 13 00:57:50.001659 kubelet[2221]: I0913 00:57:50.001415 2221 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:57:50.003464 env[1325]: time="2025-09-13T00:57:50.003416347Z" level=info msg="StopPodSandbox for \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\"" Sep 13 00:57:50.023447 kubelet[2221]: I0913 00:57:50.022800 2221 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:57:50.023776 env[1325]: time="2025-09-13T00:57:50.023730427Z" level=info msg="StopPodSandbox for \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\"" Sep 13 00:57:50.096027 env[1325]: time="2025-09-13T00:57:50.095932338Z" level=error msg="StopPodSandbox for \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\" failed" error="failed to destroy network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:50.096292 kubelet[2221]: E0913 00:57:50.096236 2221 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:57:50.096410 kubelet[2221]: E0913 00:57:50.096312 2221 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87"} Sep 13 00:57:50.096410 kubelet[2221]: E0913 00:57:50.096364 2221 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61c908e5-28ce-465e-af5e-98054050fb03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:57:50.096596 kubelet[2221]: E0913 00:57:50.096402 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61c908e5-28ce-465e-af5e-98054050fb03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd8b687f-9jrzd" podUID="61c908e5-28ce-465e-af5e-98054050fb03" Sep 13 00:57:50.121548 env[1325]: time="2025-09-13T00:57:50.121470846Z" level=error msg="StopPodSandbox for 
\"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\" failed" error="failed to destroy network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:57:50.121853 kubelet[2221]: E0913 00:57:50.121796 2221 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:57:50.121990 kubelet[2221]: E0913 00:57:50.121862 2221 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30"} Sep 13 00:57:50.121990 kubelet[2221]: E0913 00:57:50.121923 2221 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eca60e6b-177b-4588-8e7f-a2dc081264e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:57:50.121990 kubelet[2221]: E0913 00:57:50.121959 2221 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eca60e6b-177b-4588-8e7f-a2dc081264e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b845c7695-sr7sp" podUID="eca60e6b-177b-4588-8e7f-a2dc081264e1" Sep 13 00:57:56.084431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351351414.mount: Deactivated successfully. 
Sep 13 00:57:56.118359 env[1325]: time="2025-09-13T00:57:56.118287207Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:56.121118 env[1325]: time="2025-09-13T00:57:56.121071357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:56.123072 env[1325]: time="2025-09-13T00:57:56.123020277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:56.126077 env[1325]: time="2025-09-13T00:57:56.126036111Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:56.126740 env[1325]: time="2025-09-13T00:57:56.126687703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:57:56.153779 env[1325]: time="2025-09-13T00:57:56.148862624Z" level=info msg="CreateContainer within sandbox \"194980b96db16486f4e45a0fc3af09040b851a84ad1cc684eeb79f9491c49c82\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:57:56.174576 env[1325]: time="2025-09-13T00:57:56.174468515Z" level=info msg="CreateContainer within sandbox \"194980b96db16486f4e45a0fc3af09040b851a84ad1cc684eeb79f9491c49c82\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9252380b4d39f4e3c6af2a1f27be3344e127336b6bb7930869c9a33eb5c8dc72\"" Sep 13 00:57:56.175389 env[1325]: time="2025-09-13T00:57:56.175347666Z" level=info msg="StartContainer for \"9252380b4d39f4e3c6af2a1f27be3344e127336b6bb7930869c9a33eb5c8dc72\"" Sep 13 00:57:56.263250 env[1325]: time="2025-09-13T00:57:56.263188450Z" level=info msg="StartContainer for \"9252380b4d39f4e3c6af2a1f27be3344e127336b6bb7930869c9a33eb5c8dc72\" returns successfully" Sep 13 00:57:56.399848 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:57:56.400056 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:57:56.535504 env[1325]: time="2025-09-13T00:57:56.535436009Z" level=info msg="StopPodSandbox for \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\"" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.686 [INFO][3376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.687 [INFO][3376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" iface="eth0" netns="/var/run/netns/cni-4c4df73b-53c7-2141-b22e-ed7e6737f373" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.687 [INFO][3376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" iface="eth0" netns="/var/run/netns/cni-4c4df73b-53c7-2141-b22e-ed7e6737f373" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.687 [INFO][3376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" iface="eth0" netns="/var/run/netns/cni-4c4df73b-53c7-2141-b22e-ed7e6737f373" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.687 [INFO][3376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.687 [INFO][3376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.731 [INFO][3384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.732 [INFO][3384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.732 [INFO][3384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.741 [WARNING][3384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.742 [INFO][3384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.744 [INFO][3384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:57:56.751107 env[1325]: 2025-09-13 00:57:56.748 [INFO][3376] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:57:56.752428 env[1325]: time="2025-09-13T00:57:56.752369071Z" level=info msg="TearDown network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\" successfully" Sep 13 00:57:56.752659 env[1325]: time="2025-09-13T00:57:56.752572837Z" level=info msg="StopPodSandbox for \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\" returns successfully" Sep 13 00:57:56.804894 kubelet[2221]: I0913 00:57:56.804837 2221 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61c908e5-28ce-465e-af5e-98054050fb03-whisker-ca-bundle\") pod \"61c908e5-28ce-465e-af5e-98054050fb03\" (UID: \"61c908e5-28ce-465e-af5e-98054050fb03\") " Sep 13 00:57:56.805724 kubelet[2221]: I0913 00:57:56.805697 2221 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvqb5\" (UniqueName: \"kubernetes.io/projected/61c908e5-28ce-465e-af5e-98054050fb03-kube-api-access-mvqb5\") pod \"61c908e5-28ce-465e-af5e-98054050fb03\" (UID: \"61c908e5-28ce-465e-af5e-98054050fb03\") " Sep 13 00:57:56.806556 kubelet[2221]: I0913 00:57:56.806510 2221 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61c908e5-28ce-465e-af5e-98054050fb03-whisker-backend-key-pair\") pod \"61c908e5-28ce-465e-af5e-98054050fb03\" (UID: \"61c908e5-28ce-465e-af5e-98054050fb03\") " Sep 13 00:57:56.806841 kubelet[2221]: I0913 00:57:56.806421 2221 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c908e5-28ce-465e-af5e-98054050fb03-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "61c908e5-28ce-465e-af5e-98054050fb03" (UID: "61c908e5-28ce-465e-af5e-98054050fb03"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:57:56.807642 kubelet[2221]: I0913 00:57:56.807615 2221 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61c908e5-28ce-465e-af5e-98054050fb03-whisker-ca-bundle\") on node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" DevicePath \"\"" Sep 13 00:57:56.814134 kubelet[2221]: I0913 00:57:56.812923 2221 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c908e5-28ce-465e-af5e-98054050fb03-kube-api-access-mvqb5" (OuterVolumeSpecName: "kube-api-access-mvqb5") pod "61c908e5-28ce-465e-af5e-98054050fb03" (UID: "61c908e5-28ce-465e-af5e-98054050fb03"). InnerVolumeSpecName "kube-api-access-mvqb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:57:56.816569 kubelet[2221]: I0913 00:57:56.816535 2221 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c908e5-28ce-465e-af5e-98054050fb03-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "61c908e5-28ce-465e-af5e-98054050fb03" (UID: "61c908e5-28ce-465e-af5e-98054050fb03"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:57:56.908784 kubelet[2221]: I0913 00:57:56.908728 2221 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61c908e5-28ce-465e-af5e-98054050fb03-whisker-backend-key-pair\") on node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" DevicePath \"\"" Sep 13 00:57:56.908784 kubelet[2221]: I0913 00:57:56.908785 2221 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvqb5\" (UniqueName: \"kubernetes.io/projected/61c908e5-28ce-465e-af5e-98054050fb03-kube-api-access-mvqb5\") on node \"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4\" DevicePath \"\"" Sep 13 00:57:57.071600 kubelet[2221]: I0913 00:57:57.068962 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bczc8" podStartSLOduration=2.103817852 podStartE2EDuration="20.068932996s" podCreationTimestamp="2025-09-13 00:57:37 +0000 UTC" firstStartedPulling="2025-09-13 00:57:38.163068549 +0000 UTC m=+24.749562565" lastFinishedPulling="2025-09-13 00:57:56.128183688 +0000 UTC m=+42.714677709" observedRunningTime="2025-09-13 00:57:57.068011784 +0000 UTC m=+43.654505818" watchObservedRunningTime="2025-09-13 00:57:57.068932996 +0000 UTC m=+43.655427015" Sep 13 00:57:57.084882 systemd[1]: run-netns-cni\x2d4c4df73b\x2d53c7\x2d2141\x2db22e\x2ded7e6737f373.mount: Deactivated successfully. Sep 13 00:57:57.085602 systemd[1]: var-lib-kubelet-pods-61c908e5\x2d28ce\x2d465e\x2daf5e\x2d98054050fb03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmvqb5.mount: Deactivated successfully. Sep 13 00:57:57.085996 systemd[1]: var-lib-kubelet-pods-61c908e5\x2d28ce\x2d465e\x2daf5e\x2d98054050fb03-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:57:57.216370 kubelet[2221]: I0913 00:57:57.216302 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s25rz\" (UniqueName: \"kubernetes.io/projected/8118311f-7b3e-4bdd-a573-585b128702a0-kube-api-access-s25rz\") pod \"whisker-58bb7999d-g7jwq\" (UID: \"8118311f-7b3e-4bdd-a573-585b128702a0\") " pod="calico-system/whisker-58bb7999d-g7jwq" Sep 13 00:57:57.216625 kubelet[2221]: I0913 00:57:57.216408 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8118311f-7b3e-4bdd-a573-585b128702a0-whisker-backend-key-pair\") pod \"whisker-58bb7999d-g7jwq\" (UID: \"8118311f-7b3e-4bdd-a573-585b128702a0\") " pod="calico-system/whisker-58bb7999d-g7jwq" Sep 13 00:57:57.216625 kubelet[2221]: I0913 00:57:57.216441 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8118311f-7b3e-4bdd-a573-585b128702a0-whisker-ca-bundle\") pod \"whisker-58bb7999d-g7jwq\" (UID: \"8118311f-7b3e-4bdd-a573-585b128702a0\") " pod="calico-system/whisker-58bb7999d-g7jwq" Sep 13 00:57:57.488402 env[1325]: time="2025-09-13T00:57:57.487926101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58bb7999d-g7jwq,Uid:8118311f-7b3e-4bdd-a573-585b128702a0,Namespace:calico-system,Attempt:0,}" Sep 13 00:57:57.653308 systemd-networkd[1071]: cali0b688a104c3: Link UP Sep 13 00:57:57.669777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:57:57.669911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0b688a104c3: link becomes ready Sep 13 00:57:57.670709 systemd-networkd[1071]: cali0b688a104c3: Gained carrier Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.540 [INFO][3405] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.557 [INFO][3405] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0 whisker-58bb7999d- calico-system 8118311f-7b3e-4bdd-a573-585b128702a0 921 0 2025-09-13 00:57:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58bb7999d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 whisker-58bb7999d-g7jwq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0b688a104c3 [] [] }} ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Namespace="calico-system" Pod="whisker-58bb7999d-g7jwq" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.557 [INFO][3405] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Namespace="calico-system" Pod="whisker-58bb7999d-g7jwq" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.591 [INFO][3418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" 
HandleID="k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.591 [INFO][3418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" HandleID="k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", "pod":"whisker-58bb7999d-g7jwq", "timestamp":"2025-09-13 00:57:57.591271891 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.591 [INFO][3418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.591 [INFO][3418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.592 [INFO][3418] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.601 [INFO][3418] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.606 [INFO][3418] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.611 [INFO][3418] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.614 [INFO][3418] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.616 [INFO][3418] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.616 [INFO][3418] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.618 [INFO][3418] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.625 [INFO][3418] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.632 [INFO][3418] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.1/26] block=192.168.106.0/26 handle="k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.632 [INFO][3418] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.1/26] handle="k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.633 [INFO][3418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:57:57.691996 env[1325]: 2025-09-13 00:57:57.633 [INFO][3418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.1/26] IPv6=[] ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" HandleID="k8s-pod-network.c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" Sep 13 00:57:57.693258 env[1325]: 2025-09-13 00:57:57.635 [INFO][3405] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Namespace="calico-system" Pod="whisker-58bb7999d-g7jwq" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0", GenerateName:"whisker-58bb7999d-", Namespace:"calico-system", SelfLink:"", UID:"8118311f-7b3e-4bdd-a573-585b128702a0", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58bb7999d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"", Pod:"whisker-58bb7999d-g7jwq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0b688a104c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:57:57.693258 env[1325]: 2025-09-13 00:57:57.635 [INFO][3405] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.1/32] ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Namespace="calico-system" Pod="whisker-58bb7999d-g7jwq" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" Sep 13 00:57:57.693258 env[1325]: 2025-09-13 00:57:57.635 [INFO][3405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b688a104c3 
ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Namespace="calico-system" Pod="whisker-58bb7999d-g7jwq" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" Sep 13 00:57:57.693258 env[1325]: 2025-09-13 00:57:57.670 [INFO][3405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Namespace="calico-system" Pod="whisker-58bb7999d-g7jwq" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" Sep 13 00:57:57.693258 env[1325]: 2025-09-13 00:57:57.671 [INFO][3405] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Namespace="calico-system" Pod="whisker-58bb7999d-g7jwq" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0", GenerateName:"whisker-58bb7999d-", Namespace:"calico-system", SelfLink:"", UID:"8118311f-7b3e-4bdd-a573-585b128702a0", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58bb7999d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b", Pod:"whisker-58bb7999d-g7jwq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0b688a104c3", MAC:"b6:67:f9:8d:f4:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:57:57.693258 env[1325]: 2025-09-13 00:57:57.685 [INFO][3405] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b" Namespace="calico-system" Pod="whisker-58bb7999d-g7jwq" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--58bb7999d--g7jwq-eth0" Sep 13 00:57:57.715331 env[1325]: time="2025-09-13T00:57:57.715206065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:57.715331 env[1325]: time="2025-09-13T00:57:57.715278214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:57.715331 env[1325]: time="2025-09-13T00:57:57.715296872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:57.716118 env[1325]: time="2025-09-13T00:57:57.716030249Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b pid=3442 runtime=io.containerd.runc.v2 Sep 13 00:57:57.730570 kubelet[2221]: I0913 00:57:57.730493 2221 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c908e5-28ce-465e-af5e-98054050fb03" path="/var/lib/kubelet/pods/61c908e5-28ce-465e-af5e-98054050fb03/volumes" Sep 13 00:57:57.801129 env[1325]: time="2025-09-13T00:57:57.800495805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58bb7999d-g7jwq,Uid:8118311f-7b3e-4bdd-a573-585b128702a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b\"" Sep 13 00:57:57.804543 env[1325]: time="2025-09-13T00:57:57.804475530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:57:57.921000 audit[3518]: AVC avc: denied { write } for pid=3518 comm="tee" name="fd" dev="proc" ino=25118 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:57:57.943749 kernel: audit: type=1400 audit(1757725077.921:282): avc: denied { write } for pid=3518 comm="tee" name="fd" dev="proc" ino=25118 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:57:57.985643 kernel: audit: type=1300 audit(1757725077.921:282): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbb5b0787 a2=241 a3=1b6 items=1 ppid=3497 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:57.921000 audit[3518]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbb5b0787 a2=241 a3=1b6 items=1 ppid=3497 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:57.921000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:57:58.003644 kernel: audit: type=1307 audit(1757725077.921:282): cwd="/etc/service/enabled/felix/log" Sep 13 00:57:57.921000 audit: PATH item=0 name="/dev/fd/63" inode=24256 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:58.030640 kernel: audit: type=1302 audit(1757725077.921:282): item=0 name="/dev/fd/63" inode=24256 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:57.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.100647 kernel: audit: type=1327 audit(1757725077.921:282): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.100837 kernel: audit: type=1400 audit(1757725078.046:283): avc: denied { write } for pid=3532 comm="tee" name="fd" dev="proc" ino=25129 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=dir permissive=0 Sep 13 00:57:58.046000 audit[3532]: AVC avc: denied { write } for pid=3532 comm="tee" name="fd" dev="proc" ino=25129 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:57:58.046000 audit[3532]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffda75e5788 a2=241 a3=1b6 items=1 ppid=3485 pid=3532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.144748 kernel: audit: type=1300 audit(1757725078.046:283): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffda75e5788 a2=241 a3=1b6 items=1 ppid=3485 pid=3532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.046000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:57:58.154727 kernel: audit: type=1307 audit(1757725078.046:283): cwd="/etc/service/enabled/bird/log" Sep 13 00:57:58.046000 audit: PATH item=0 name="/dev/fd/63" inode=24284 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:58.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.198893 kernel: audit: type=1302 audit(1757725078.046:283): item=0 name="/dev/fd/63" inode=24284 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:58.199072 kernel: audit: type=1327 audit(1757725078.046:283): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.047000 audit[3536]: AVC avc: denied { write } for pid=3536 comm="tee" name="fd" dev="proc" ino=25131 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:57:58.047000 audit[3536]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb3d91777 a2=241 a3=1b6 items=1 ppid=3501 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.047000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:57:58.047000 audit: PATH item=0 name="/dev/fd/63" inode=24288 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:58.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.047000 audit[3530]: AVC avc: denied { write } for pid=3530 comm="tee" name="fd" dev="proc" ino=25133 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:57:58.047000 audit[3530]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec40e3787 a2=241 a3=1b6 items=1 ppid=3487 pid=3530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.047000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:57:58.047000 audit: PATH item=0 name="/dev/fd/63" inode=24281 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:58.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.047000 audit[3534]: AVC avc: denied { write } for pid=3534 comm="tee" name="fd" dev="proc" ino=25137 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:57:58.047000 audit[3534]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0aca8778 a2=241 a3=1b6 items=1 ppid=3491 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.047000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:57:58.047000 audit: PATH item=0 name="/dev/fd/63" inode=24287 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:58.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.119000 audit[3538]: AVC avc: denied { write } for pid=3538 comm="tee" name="fd" dev="proc" ino=25142 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:57:58.119000 audit[3538]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe31cf2787 a2=241 a3=1b6 items=1 ppid=3488 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.119000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:57:58.119000 audit: PATH item=0 name="/dev/fd/63" inode=24289 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:58.119000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.124000 audit[3553]: AVC avc: denied { write } for pid=3553 comm="tee" name="fd" dev="proc" ino=25144 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:57:58.124000 audit[3553]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc381cc789 a2=241 a3=1b6 items=1 ppid=3495 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.124000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:57:58.124000 audit: PATH item=0 name="/dev/fd/63" inode=25124 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:57:58.124000 
audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit: BPF prog-id=10 op=LOAD Sep 13 00:57:58.564000 audit[3593]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc5c0e300 a2=98 a3=1fffffffffffffff items=0 ppid=3499 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.564000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:57:58.564000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { 
perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit: BPF prog-id=11 op=LOAD Sep 13 00:57:58.564000 audit[3593]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc5c0e1e0 a2=94 a3=3 items=0 ppid=3499 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.564000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:57:58.564000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { bpf } for pid=3593 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit: BPF prog-id=12 op=LOAD Sep 13 00:57:58.564000 audit[3593]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc5c0e220 a2=94 a3=7fffc5c0e400 items=0 ppid=3499 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.564000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:57:58.564000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:57:58.564000 audit[3593]: AVC avc: denied { perfmon } for pid=3593 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.564000 audit[3593]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fffc5c0e2f0 a2=50 a3=a000000085 items=0 ppid=3499 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.564000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.568000 audit: BPF prog-id=13 op=LOAD Sep 13 00:57:58.568000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffecdb70160 a2=98 a3=3 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.568000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.569000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.569000 audit: BPF prog-id=14 op=LOAD Sep 13 00:57:58.569000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffecdb6ff50 a2=94 a3=54428f items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.570000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:57:58.570000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 
13 00:57:58.570000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.570000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.570000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.570000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.570000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.570000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.570000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.570000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.570000 audit: BPF prog-id=15 op=LOAD Sep 13 00:57:58.570000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffecdb6ff80 a2=94 a3=2 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.571000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:57:58.754000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.754000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.754000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.754000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.754000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.754000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:57:58.754000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.754000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.754000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.754000 audit: BPF prog-id=16 op=LOAD Sep 13 00:57:58.754000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffecdb6fe40 a2=94 a3=1 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.754000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.755000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:57:58.755000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.755000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffecdb6ff10 a2=50 a3=7ffecdb6fff0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.773000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.773000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffecdb6fe50 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.773000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.774000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.774000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffecdb6fe80 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.774000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.774000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.774000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffecdb6fd90 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.774000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.775000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.775000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffecdb6fea0 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.775000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.775000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.775000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffecdb6fe80 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.775000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.775000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.775000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffecdb6fe70 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.775000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.776000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.776000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffecdb6fea0 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.776000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.776000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.776000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffecdb6fe80 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.776000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.776000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.776000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffecdb6fea0 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.776000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.777000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.777000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffecdb6fe70 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.777000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.777000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.777000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffecdb6fee0 a2=28 a3=0 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.777000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.778000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.778000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffecdb6fc90 a2=50 a3=1 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.778000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { 
perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.779000 audit: BPF prog-id=17 op=LOAD Sep 13 00:57:58.779000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffecdb6fc90 a2=94 a3=5 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.780000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffecdb6fd40 a2=50 a3=1 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffecdb6fe60 a2=4 a3=38 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.780000 audit[3594]: AVC avc: denied { confidentiality } for pid=3594 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:57:58.780000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffecdb6feb0 a2=94 a3=6 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.782000 audit[3594]: AVC avc: denied { confidentiality } for pid=3594 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:57:58.782000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffecdb6f660 a2=94 a3=88 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { perfmon } for pid=3594 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { bpf } for pid=3594 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.783000 audit[3594]: AVC avc: denied { confidentiality } for pid=3594 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Sep 13 00:57:58.783000 audit[3594]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffecdb6f660 a2=94 a3=88 items=0 ppid=3499 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.783000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.798000 audit: BPF prog-id=18 op=LOAD Sep 13 00:57:58.798000 audit[3597]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda460e870 a2=98 a3=1999999999999999 items=0 ppid=3499 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.798000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:57:58.799000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.799000 audit: BPF prog-id=19 op=LOAD Sep 13 00:57:58.799000 audit[3597]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda460e750 a2=94 a3=ffff items=0 ppid=3499 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.799000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:57:58.800000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { perfmon } for pid=3597 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit[3597]: AVC avc: denied { bpf } for pid=3597 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:58.801000 audit: BPF prog-id=20 op=LOAD Sep 13 00:57:58.801000 audit[3597]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda460e790 a2=94 a3=7ffda460e970 items=0 ppid=3499 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:58.801000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:57:58.802000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:57:58.996056 env[1325]: time="2025-09-13T00:57:58.995989291Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:59.002279 env[1325]: time="2025-09-13T00:57:59.002200961Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:59.006849 env[1325]: time="2025-09-13T00:57:59.004159633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:59.008507 env[1325]: time="2025-09-13T00:57:59.008453899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:57:59.008679 env[1325]: time="2025-09-13T00:57:59.007296796Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:59.011991 systemd-networkd[1071]: vxlan.calico: Link UP Sep 13 00:57:59.012006 systemd-networkd[1071]: vxlan.calico: Gained carrier Sep 13 00:57:59.021881 env[1325]: time="2025-09-13T00:57:59.021833566Z" level=info msg="CreateContainer within sandbox \"c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:57:59.048930 env[1325]: time="2025-09-13T00:57:59.048876573Z" level=info msg="CreateContainer within sandbox \"c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6f1f6d97932a09b805cca273e7dbca062675760188728ed350c0f6c373ea728c\"" Sep 13 00:57:59.052046 env[1325]: time="2025-09-13T00:57:59.051983823Z" level=info 
msg="StartContainer for \"6f1f6d97932a09b805cca273e7dbca062675760188728ed350c0f6c373ea728c\"" Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.089000 audit: BPF prog-id=21 op=LOAD Sep 13 00:57:59.089000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9c271f60 a2=98 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.094000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.094000 audit: BPF prog-id=22 op=LOAD Sep 13 00:57:59.094000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9c271d70 a2=94 a3=54428f items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 
audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit: BPF prog-id=23 op=LOAD Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9c271da0 a2=94 a3=2 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe9c271c70 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9c271ca0 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9c271bb0 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe9c271cc0 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe9c271ca0 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe9c271c90 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe9c271cc0 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9c271ca0 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9c271cc0 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9c271c90 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe9c271d00 a2=28 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for 
pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.095000 audit: BPF prog-id=24 op=LOAD Sep 13 00:57:59.095000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe9c271b70 a2=94 a3=0 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.095000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.095000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:57:59.096000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.096000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe9c271b60 a2=50 a3=2800 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.096000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe9c271b60 a2=50 a3=2800 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.098000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 
audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.098000 audit: BPF prog-id=25 op=LOAD Sep 13 00:57:59.098000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe9c271380 a2=94 a3=2 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.098000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.099000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { perfmon } for pid=3638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit[3638]: AVC avc: denied { bpf } for pid=3638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.099000 audit: BPF prog-id=26 op=LOAD Sep 13 00:57:59.099000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe9c271480 a2=94 a3=30 items=0 ppid=3499 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
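Note on the audit records above: the PROCTITLE field is the process argv, hex-encoded with NUL bytes separating the arguments. Decoded, the payloads in this stretch of the log correspond to bpftool invocations such as "bpftool map list --json", "bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 ...", and "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp", i.e. Calico setting up its BPF maps and XDP prefilter. A minimal Python sketch for decoding such a payload follows; the decode_proctitle helper name is illustrative and not part of any tool appearing in this log:

    # Decode an audit PROCTITLE hex payload (NUL-separated argv) into a readable command line.
    PROCTITLE_HEX = "627066746F6F6C006D6170006C697374002D2D6A736F6E"  # copied from a record above

    def decode_proctitle(hex_payload: str) -> str:
        raw = bytes.fromhex(hex_payload)                               # hex string -> raw bytes
        return " ".join(a.decode() for a in raw.split(b"\x00") if a)   # split on NULs, drop empties

    print(decode_proctitle(PROCTITLE_HEX))                             # prints: bpftool map list --json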
Sep 13 00:57:59.129000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.129000 audit: BPF prog-id=27 op=LOAD Sep 13 00:57:59.129000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe12238d20 a2=98 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.129000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.129000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit: BPF prog-id=28 op=LOAD Sep 13 00:57:59.130000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe12238b10 a2=94 a3=54428f items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.130000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { bpf } for pid=3648 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.130000 audit: BPF prog-id=29 op=LOAD Sep 13 00:57:59.130000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe12238b40 a2=94 a3=2 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.130000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:57:59.240861 env[1325]: time="2025-09-13T00:57:59.240779812Z" level=info msg="StartContainer for \"6f1f6d97932a09b805cca273e7dbca062675760188728ed350c0f6c373ea728c\" returns successfully" Sep 13 00:57:59.245841 env[1325]: time="2025-09-13T00:57:59.245790183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: AVC avc: 
denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit: BPF prog-id=30 op=LOAD Sep 13 00:57:59.330000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe12238a00 a2=94 a3=1 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.330000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.330000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:57:59.330000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.330000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe12238ad0 a2=50 a3=7ffe12238bb0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.330000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe12238a10 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe12238a40 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe12238950 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe12238a60 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe12238a40 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe12238a30 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe12238a60 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe12238a40 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe12238a60 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe12238a30 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.347000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:57:59.347000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe12238aa0 a2=28 a3=0 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe12238850 a2=50 a3=1 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit: BPF prog-id=31 op=LOAD Sep 13 00:57:59.348000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe12238850 a2=94 a3=5 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.348000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe12238900 a2=50 a3=1 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe12238a20 a2=4 a3=38 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { confidentiality } for pid=3648 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:57:59.348000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe12238a70 a2=94 a3=6 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.348000 audit[3648]: AVC avc: denied { 
confidentiality } for pid=3648 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:57:59.348000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe12238220 a2=94 a3=88 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { perfmon } for pid=3648 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe12238220 a2=94 a3=88 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.349000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.349000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe12239c50 a2=10 a3=f8f00800 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.350000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.350000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe12239af0 a2=10 a3=3 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.350000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.350000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe12239a90 a2=10 a3=3 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.350000 audit[3648]: AVC avc: denied { bpf } for pid=3648 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:57:59.350000 audit[3648]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe12239a90 a2=10 a3=7 items=0 ppid=3499 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:57:59.359000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:57:59.454841 systemd-networkd[1071]: cali0b688a104c3: Gained IPv6LL Sep 13 00:57:59.465000 audit[3689]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3689 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:57:59.465000 audit[3689]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff765aa430 a2=0 a3=7fff765aa41c items=0 ppid=3499 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.465000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:57:59.487000 audit[3687]: NETFILTER_CFG table=raw:102 family=2 entries=21 
op=nft_register_chain pid=3687 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:57:59.487000 audit[3687]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd58b62d70 a2=0 a3=7ffd58b62d5c items=0 ppid=3499 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.487000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:57:59.487000 audit[3688]: NETFILTER_CFG table=nat:103 family=2 entries=15 op=nft_register_chain pid=3688 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:57:59.487000 audit[3688]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcf50c15c0 a2=0 a3=558ba169f000 items=0 ppid=3499 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.487000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:57:59.496000 audit[3690]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=3690 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:57:59.496000 audit[3690]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffd66414320 a2=0 a3=7ffd6641430c items=0 ppid=3499 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:57:59.496000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:57:59.721232 env[1325]: time="2025-09-13T00:57:59.721155465Z" level=info msg="StopPodSandbox for \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\"" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.798 [INFO][3715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.799 [INFO][3715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" iface="eth0" netns="/var/run/netns/cni-b12bd6cd-bd04-6557-195d-335551c33dd2" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.799 [INFO][3715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" iface="eth0" netns="/var/run/netns/cni-b12bd6cd-bd04-6557-195d-335551c33dd2" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.799 [INFO][3715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" iface="eth0" netns="/var/run/netns/cni-b12bd6cd-bd04-6557-195d-335551c33dd2" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.799 [INFO][3715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.799 [INFO][3715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.829 [INFO][3723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.829 [INFO][3723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.829 [INFO][3723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.839 [WARNING][3723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.840 [INFO][3723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.842 [INFO][3723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:57:59.846695 env[1325]: 2025-09-13 00:57:59.844 [INFO][3715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:57:59.846695 env[1325]: time="2025-09-13T00:57:59.846829777Z" level=info msg="TearDown network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\" successfully" Sep 13 00:57:59.846695 env[1325]: time="2025-09-13T00:57:59.846895564Z" level=info msg="StopPodSandbox for \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\" returns successfully" Sep 13 00:57:59.846695 env[1325]: time="2025-09-13T00:57:59.848031630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-28jg5,Uid:820afca3-f77c-4bac-b219-c18864653831,Namespace:kube-system,Attempt:1,}" Sep 13 00:57:59.855032 systemd[1]: run-netns-cni\x2db12bd6cd\x2dbd04\x2d6557\x2d195d\x2d335551c33dd2.mount: Deactivated successfully. 
Sep 13 00:58:00.034593 systemd-networkd[1071]: cali2b538b57bc0: Link UP Sep 13 00:58:00.043740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2b538b57bc0: link becomes ready Sep 13 00:58:00.047366 systemd-networkd[1071]: cali2b538b57bc0: Gained carrier Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.930 [INFO][3730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0 coredns-7c65d6cfc9- kube-system 820afca3-f77c-4bac-b219-c18864653831 934 0 2025-09-13 00:57:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 coredns-7c65d6cfc9-28jg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b538b57bc0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28jg5" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.930 [INFO][3730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28jg5" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.976 [INFO][3742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" HandleID="k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.976 [INFO][3742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" HandleID="k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", "pod":"coredns-7c65d6cfc9-28jg5", "timestamp":"2025-09-13 00:57:59.976495545 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.977 [INFO][3742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.977 [INFO][3742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.977 [INFO][3742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.988 [INFO][3742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:57:59.994 [INFO][3742] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.000 [INFO][3742] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.003 [INFO][3742] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.006 [INFO][3742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.006 [INFO][3742] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.008 [INFO][3742] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.013 [INFO][3742] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.023 [INFO][3742] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.2/26] block=192.168.106.0/26 handle="k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.023 [INFO][3742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.2/26] handle="k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.023 [INFO][3742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
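The IPAM steps above confirm this node's affinity for the block 192.168.106.0/26 and claim 192.168.106.2/26 out of it for the coredns pod. A quick check with Python's ipaddress module (a sketch for reading the log, not Calico code) shows the claimed address falls inside the affine block and how large a /26 block is:

    import ipaddress

    block = ipaddress.ip_network("192.168.106.0/26")  # the host's affine block from the log
    addr = ipaddress.ip_address("192.168.106.2")      # the address IPAM just claimed
    print(addr in block)        # True
    print(block.num_addresses)  # 64 addresses per /26 block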
Sep 13 00:58:00.073586 env[1325]: 2025-09-13 00:58:00.023 [INFO][3742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.2/26] IPv6=[] ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" HandleID="k8s-pod-network.9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:00.075410 env[1325]: 2025-09-13 00:58:00.026 [INFO][3730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28jg5" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"820afca3-f77c-4bac-b219-c18864653831", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"", Pod:"coredns-7c65d6cfc9-28jg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b538b57bc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:00.075410 env[1325]: 2025-09-13 00:58:00.026 [INFO][3730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.2/32] ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28jg5" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:00.075410 env[1325]: 2025-09-13 00:58:00.026 [INFO][3730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b538b57bc0 ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28jg5" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:00.075410 env[1325]: 2025-09-13 00:58:00.048 [INFO][3730] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28jg5" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:00.075410 env[1325]: 2025-09-13 00:58:00.048 [INFO][3730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28jg5" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"820afca3-f77c-4bac-b219-c18864653831", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa", Pod:"coredns-7c65d6cfc9-28jg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b538b57bc0", MAC:"76:13:38:15:50:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:00.075410 env[1325]: 2025-09-13 00:58:00.066 [INFO][3730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28jg5" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:00.094000 audit[3766]: NETFILTER_CFG table=filter:105 family=2 entries=42 op=nft_register_chain pid=3766 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:58:00.094000 audit[3766]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffca015c1d0 a2=0 a3=7ffca015c1bc items=0 ppid=3499 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:00.094000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:58:00.096850 systemd-networkd[1071]: vxlan.calico: Gained IPv6LL Sep 13 00:58:00.097690 env[1325]: time="2025-09-13T00:58:00.094069397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:58:00.097690 env[1325]: time="2025-09-13T00:58:00.094123265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:58:00.097690 env[1325]: time="2025-09-13T00:58:00.094144001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:58:00.097690 env[1325]: time="2025-09-13T00:58:00.094372466Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa pid=3765 runtime=io.containerd.runc.v2 Sep 13 00:58:00.148957 systemd[1]: run-containerd-runc-k8s.io-9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa-runc.J11Mdx.mount: Deactivated successfully. Sep 13 00:58:00.220747 env[1325]: time="2025-09-13T00:58:00.220697965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-28jg5,Uid:820afca3-f77c-4bac-b219-c18864653831,Namespace:kube-system,Attempt:1,} returns sandbox id \"9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa\"" Sep 13 00:58:00.229892 env[1325]: time="2025-09-13T00:58:00.229807402Z" level=info msg="CreateContainer within sandbox \"9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:58:00.266707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863386361.mount: Deactivated successfully. 
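In the SYSCALL records throughout this stretch, arch=c000003e is AUDIT_ARCH_X86_64, so syscall=321 is bpf(2) (the bpftool program loads above) and syscall=46 is sendmsg(2) (the netlink batches sent by iptables-nft-restore). A lookup sketch covering just the numbers that appear in this log, assuming x86_64 numbering (ausyscall(8) from the audit userspace does this properly):

    # Translate the syscall numbers seen in these audit records (x86_64 numbering assumed).
    X86_64_SYSCALLS = {46: "sendmsg", 321: "bpf"}

    def syscall_name(nr: int) -> str:
        return X86_64_SYSCALLS.get(nr, f"unknown({nr})")

    print(syscall_name(321))  # bpf      -- bpftool program loads
    print(syscall_name(46))   # sendmsg  -- nft_register_chain netlink batches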
Sep 13 00:58:00.274536 env[1325]: time="2025-09-13T00:58:00.274451729Z" level=info msg="CreateContainer within sandbox \"9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b78cf98ef60d6e662a299aa0c9c01a7e1e1d513d42d52b5b24235913f2c6067\"" Sep 13 00:58:00.282455 env[1325]: time="2025-09-13T00:58:00.282397133Z" level=info msg="StartContainer for \"5b78cf98ef60d6e662a299aa0c9c01a7e1e1d513d42d52b5b24235913f2c6067\"" Sep 13 00:58:00.392997 env[1325]: time="2025-09-13T00:58:00.392904849Z" level=info msg="StartContainer for \"5b78cf98ef60d6e662a299aa0c9c01a7e1e1d513d42d52b5b24235913f2c6067\" returns successfully" Sep 13 00:58:01.104786 kubelet[2221]: I0913 00:58:01.104702 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-28jg5" podStartSLOduration=41.104659475 podStartE2EDuration="41.104659475s" podCreationTimestamp="2025-09-13 00:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:58:01.104048189 +0000 UTC m=+47.690542202" watchObservedRunningTime="2025-09-13 00:58:01.104659475 +0000 UTC m=+47.691153496" Sep 13 00:58:01.177000 audit[3838]: NETFILTER_CFG table=filter:106 family=2 entries=17 op=nft_register_rule pid=3838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:01.177000 audit[3838]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd5908d180 a2=0 a3=7ffd5908d16c items=0 ppid=2341 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:01.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:01.183372 systemd-networkd[1071]: cali2b538b57bc0: Gained IPv6LL Sep 13 00:58:01.184000 audit[3838]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=3838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:01.184000 audit[3838]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd5908d180 a2=0 a3=7ffd5908d16c items=0 ppid=2341 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:01.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:01.533661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867682400.mount: Deactivated successfully. 
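
The kubelet pod_startup_latency_tracker line above reports podStartSLOduration=41.104659475s for coredns-7c65d6cfc9-28jg5. As a rough cross-check (a sketch only; kubelet's exact bookkeeping may differ), that figure matches the gap between the pod's creation timestamp and the watched running time quoted in the same record:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Layout matching the timestamps as they appear in the kubelet record.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    created, err := time.Parse(layout, "2025-09-13 00:57:20 +0000 UTC")
    if err != nil {
        panic(err)
    }
    running, err := time.Parse(layout, "2025-09-13 00:58:01.104659475 +0000 UTC")
    if err != nil {
        panic(err)
    }
    // Prints 41.104659475s, matching the reported podStartSLOduration.
    fmt.Println(running.Sub(created))
}
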
Sep 13 00:58:01.557443 env[1325]: time="2025-09-13T00:58:01.557371214Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:01.560440 env[1325]: time="2025-09-13T00:58:01.560394561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:01.562593 env[1325]: time="2025-09-13T00:58:01.562541476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:01.564908 env[1325]: time="2025-09-13T00:58:01.564860821Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:01.565793 env[1325]: time="2025-09-13T00:58:01.565745345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:58:01.570423 env[1325]: time="2025-09-13T00:58:01.570377128Z" level=info msg="CreateContainer within sandbox \"c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:58:01.593990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327714995.mount: Deactivated successfully. Sep 13 00:58:01.594572 env[1325]: time="2025-09-13T00:58:01.594436649Z" level=info msg="CreateContainer within sandbox \"c6d1f9286dfcc48ac2717274facfa1ec078ae6266d0f83abc67526f84568756b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"23568fe9dcfd203e9505925a620abacd313b8f3f542bc828ea4b6b19da7718f2\"" Sep 13 00:58:01.597249 env[1325]: time="2025-09-13T00:58:01.596484588Z" level=info msg="StartContainer for \"23568fe9dcfd203e9505925a620abacd313b8f3f542bc828ea4b6b19da7718f2\"" Sep 13 00:58:01.713042 env[1325]: time="2025-09-13T00:58:01.712971873Z" level=info msg="StartContainer for \"23568fe9dcfd203e9505925a620abacd313b8f3f542bc828ea4b6b19da7718f2\" returns successfully" Sep 13 00:58:01.724737 env[1325]: time="2025-09-13T00:58:01.720701691Z" level=info msg="StopPodSandbox for \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\"" Sep 13 00:58:01.724737 env[1325]: time="2025-09-13T00:58:01.721286211Z" level=info msg="StopPodSandbox for \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\"" Sep 13 00:58:01.724737 env[1325]: time="2025-09-13T00:58:01.721834819Z" level=info msg="StopPodSandbox for \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\"" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.930 [INFO][3907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.931 [INFO][3907] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" iface="eth0" netns="/var/run/netns/cni-cd68f66d-f8b4-b80f-1217-648093dd141d" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.932 [INFO][3907] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" iface="eth0" netns="/var/run/netns/cni-cd68f66d-f8b4-b80f-1217-648093dd141d" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.932 [INFO][3907] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" iface="eth0" netns="/var/run/netns/cni-cd68f66d-f8b4-b80f-1217-648093dd141d" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.932 [INFO][3907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.932 [INFO][3907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.972 [INFO][3936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.972 [INFO][3936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.972 [INFO][3936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.983 [WARNING][3936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.983 [INFO][3936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.985 [INFO][3936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:01.995162 env[1325]: 2025-09-13 00:58:01.987 [INFO][3907] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:01.996359 env[1325]: time="2025-09-13T00:58:01.996295289Z" level=info msg="TearDown network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\" successfully" Sep 13 00:58:01.996549 env[1325]: time="2025-09-13T00:58:01.996517543Z" level=info msg="StopPodSandbox for \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\" returns successfully" Sep 13 00:58:01.997799 env[1325]: time="2025-09-13T00:58:01.997756505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649475f784-nsdkw,Uid:c749baa2-250c-406e-806c-5781eafb74e7,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.869 [INFO][3899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.870 [INFO][3899] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" iface="eth0" netns="/var/run/netns/cni-932429f2-50fb-2e03-ca22-d90647ef592c" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.870 [INFO][3899] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" iface="eth0" netns="/var/run/netns/cni-932429f2-50fb-2e03-ca22-d90647ef592c" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.873 [INFO][3899] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" iface="eth0" netns="/var/run/netns/cni-932429f2-50fb-2e03-ca22-d90647ef592c" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.873 [INFO][3899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.873 [INFO][3899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.984 [INFO][3924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.984 [INFO][3924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:01.987 [INFO][3924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:02.001 [WARNING][3924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:02.001 [INFO][3924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:02.007 [INFO][3924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:02.014502 env[1325]: 2025-09-13 00:58:02.012 [INFO][3899] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:02.015662 env[1325]: time="2025-09-13T00:58:02.015595305Z" level=info msg="TearDown network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\" successfully" Sep 13 00:58:02.015794 env[1325]: time="2025-09-13T00:58:02.015768071Z" level=info msg="StopPodSandbox for \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\" returns successfully" Sep 13 00:58:02.018094 env[1325]: time="2025-09-13T00:58:02.018054532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b845c7695-sr7sp,Uid:eca60e6b-177b-4588-8e7f-a2dc081264e1,Namespace:calico-system,Attempt:1,}" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:01.904 [INFO][3903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:01.904 [INFO][3903] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" iface="eth0" netns="/var/run/netns/cni-7eb67576-05a8-d2f9-9d00-c54c588b63b5" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:01.905 [INFO][3903] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" iface="eth0" netns="/var/run/netns/cni-7eb67576-05a8-d2f9-9d00-c54c588b63b5" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:01.905 [INFO][3903] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" iface="eth0" netns="/var/run/netns/cni-7eb67576-05a8-d2f9-9d00-c54c588b63b5" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:01.905 [INFO][3903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:01.905 [INFO][3903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:01.999 [INFO][3930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:02.000 [INFO][3930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:02.007 [INFO][3930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:02.024 [WARNING][3930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:02.024 [INFO][3930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:02.027 [INFO][3930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:02.043165 env[1325]: 2025-09-13 00:58:02.034 [INFO][3903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:02.044320 env[1325]: time="2025-09-13T00:58:02.043343504Z" level=info msg="TearDown network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\" successfully" Sep 13 00:58:02.044320 env[1325]: time="2025-09-13T00:58:02.043395347Z" level=info msg="StopPodSandbox for \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\" returns successfully" Sep 13 00:58:02.045691 env[1325]: time="2025-09-13T00:58:02.045647175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649475f784-qqkdv,Uid:5482a9ae-642d-42d0-b694-214ca0591875,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:58:02.155317 systemd[1]: run-netns-cni\x2d7eb67576\x2d05a8\x2dd2f9\x2d9d00\x2dc54c588b63b5.mount: Deactivated successfully. Sep 13 00:58:02.155537 systemd[1]: run-netns-cni\x2d932429f2\x2d50fb\x2d2e03\x2dca22\x2dd90647ef592c.mount: Deactivated successfully. Sep 13 00:58:02.155721 systemd[1]: run-netns-cni\x2dcd68f66d\x2df8b4\x2db80f\x2d1217\x2d648093dd141d.mount: Deactivated successfully. 
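
The run-netns and containerd tmpmount units in these records use systemd's unit-name escaping: '/' in the path becomes '-', and other characters (including literal dashes) become C-style \xNN sequences. A simplified Go sketch of the reverse mapping, sufficient for the unit names in this log (not a full reimplementation of systemd-escape):

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// unescapeUnitPath reverses systemd unit-name escaping for a mount unit:
// "\xNN" sequences become the literal byte, remaining '-' separators
// become '/', and a leading '/' is restored.
func unescapeUnitPath(unit string) string {
    name := strings.TrimSuffix(unit, ".mount")
    var b strings.Builder
    for i := 0; i < len(name); {
        if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
            if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                b.WriteByte(byte(v))
                i += 4
                continue
            }
        }
        if name[i] == '-' {
            b.WriteByte('/')
        } else {
            b.WriteByte(name[i])
        }
        i++
    }
    return "/" + b.String()
}

func main() {
    // Netns mount unit cleaned up in the systemd record above.
    unit := `run-netns-cni\x2dcd68f66d\x2df8b4\x2db80f\x2d1217\x2d648093dd141d.mount`
    fmt.Println(unescapeUnitPath(unit))
    // /run/netns/cni-cd68f66d-f8b4-b80f-1217-648093dd141d
}
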
Sep 13 00:58:02.201000 audit[3981]: NETFILTER_CFG table=filter:108 family=2 entries=13 op=nft_register_rule pid=3981 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:02.201000 audit[3981]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd91010780 a2=0 a3=7ffd9101076c items=0 ppid=2341 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:02.201000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:02.206000 audit[3981]: NETFILTER_CFG table=nat:109 family=2 entries=27 op=nft_register_chain pid=3981 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:02.206000 audit[3981]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd91010780 a2=0 a3=7ffd9101076c items=0 ppid=2341 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:02.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:02.370068 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:58:02.360031 systemd-networkd[1071]: calia599228491e: Link UP Sep 13 00:58:02.378784 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia599228491e: link becomes ready Sep 13 00:58:02.385876 kubelet[2221]: I0913 00:58:02.385785 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58bb7999d-g7jwq" podStartSLOduration=1.620586121 podStartE2EDuration="5.385755613s" podCreationTimestamp="2025-09-13 00:57:57 +0000 UTC" firstStartedPulling="2025-09-13 00:57:57.802090984 +0000 UTC m=+44.388584988" lastFinishedPulling="2025-09-13 00:58:01.567260471 +0000 UTC m=+48.153754480" observedRunningTime="2025-09-13 00:58:02.158784212 +0000 UTC m=+48.745278232" watchObservedRunningTime="2025-09-13 00:58:02.385755613 +0000 UTC m=+48.972249633" Sep 13 00:58:02.389167 systemd-networkd[1071]: calia599228491e: Gained carrier Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.113 [INFO][3944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0 calico-apiserver-649475f784- calico-apiserver c749baa2-250c-406e-806c-5781eafb74e7 958 0 2025-09-13 00:57:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:649475f784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 calico-apiserver-649475f784-nsdkw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia599228491e [] [] }} ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-nsdkw" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.113 [INFO][3944] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-nsdkw" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.278 [INFO][3978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" HandleID="k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.278 [INFO][3978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" HandleID="k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", "pod":"calico-apiserver-649475f784-nsdkw", "timestamp":"2025-09-13 00:58:02.277702499 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.278 [INFO][3978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.279 [INFO][3978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
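
Each of these Calico CNI and IPAM records is a source-file message followed by repeated key="value" attributes (ContainerID, HandleID, Workload, and so on). A small, purely illustrative Go sketch for pulling those attributes out when grepping a journal like this one; the sample line is abridged from the IPAM request record above:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Matches the quoted key="value" attributes the Calico plugin appends
    // to its log messages.
    attr := regexp.MustCompile(`(\w+)="([^"]*)"`)
    line := `Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0"`
    for _, m := range attr.FindAllStringSubmatch(line, -1) {
        fmt.Printf("%s = %s\n", m[1], m[2])
    }
}
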
Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.279 [INFO][3978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.295 [INFO][3978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.304 [INFO][3978] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.310 [INFO][3978] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.318 [INFO][3978] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.322 [INFO][3978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.322 [INFO][3978] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.324 [INFO][3978] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177 Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.329 [INFO][3978] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.338 [INFO][3978] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.3/26] block=192.168.106.0/26 handle="k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.338 [INFO][3978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.3/26] handle="k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.338 [INFO][3978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
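
The ipam messages around this point show the usual flow: acquire the host-wide IPAM lock, confirm this node's affinity for block 192.168.106.0/26, load the block, claim the next free address (192.168.106.3 here), write the block back, and release the lock. The sketch below only illustrates the "next free address in the block" step; it is not Calico's implementation, which also persists the block and handles reservations and write conflicts:

package main

import (
    "fmt"
    "net/netip"
)

// nextFree returns the first address in the block that is not already
// allocated. Purely illustrative.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
    for a := block.Addr(); block.Contains(a); a = a.Next() {
        if !allocated[a] {
            return a, true
        }
    }
    return netip.Addr{}, false
}

func main() {
    block := netip.MustParsePrefix("192.168.106.0/26")
    allocated := map[netip.Addr]bool{
        netip.MustParseAddr("192.168.106.0"): true, // network address
        netip.MustParseAddr("192.168.106.1"): true, // assumed already in use
        netip.MustParseAddr("192.168.106.2"): true, // coredns pod above
    }
    if a, ok := nextFree(block, allocated); ok {
        fmt.Println(a) // 192.168.106.3, matching the claim in the log
    }
}
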
Sep 13 00:58:02.393183 env[1325]: 2025-09-13 00:58:02.339 [INFO][3978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.3/26] IPv6=[] ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" HandleID="k8s-pod-network.9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:02.396024 env[1325]: 2025-09-13 00:58:02.347 [INFO][3944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-nsdkw" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0", GenerateName:"calico-apiserver-649475f784-", Namespace:"calico-apiserver", SelfLink:"", UID:"c749baa2-250c-406e-806c-5781eafb74e7", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649475f784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"", Pod:"calico-apiserver-649475f784-nsdkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia599228491e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:02.396024 env[1325]: 2025-09-13 00:58:02.347 [INFO][3944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.3/32] ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-nsdkw" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:02.396024 env[1325]: 2025-09-13 00:58:02.347 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia599228491e ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-nsdkw" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:02.396024 env[1325]: 2025-09-13 00:58:02.361 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-nsdkw" 
WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:02.396024 env[1325]: 2025-09-13 00:58:02.362 [INFO][3944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-nsdkw" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0", GenerateName:"calico-apiserver-649475f784-", Namespace:"calico-apiserver", SelfLink:"", UID:"c749baa2-250c-406e-806c-5781eafb74e7", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649475f784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177", Pod:"calico-apiserver-649475f784-nsdkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia599228491e", MAC:"1a:73:41:f6:df:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:02.396024 env[1325]: 2025-09-13 00:58:02.391 [INFO][3944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-nsdkw" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:02.475774 env[1325]: time="2025-09-13T00:58:02.475674516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:58:02.478200 env[1325]: time="2025-09-13T00:58:02.478120460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:58:02.478487 env[1325]: time="2025-09-13T00:58:02.478433005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:58:02.478975 env[1325]: time="2025-09-13T00:58:02.478914617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177 pid=4019 runtime=io.containerd.runc.v2 Sep 13 00:58:02.482000 audit[4008]: NETFILTER_CFG table=filter:110 family=2 entries=54 op=nft_register_chain pid=4008 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:58:02.482000 audit[4008]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffd5b139c40 a2=0 a3=7ffd5b139c2c items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:02.482000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:58:02.527858 systemd-networkd[1071]: calia79c7d24088: Link UP Sep 13 00:58:02.537687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia79c7d24088: link becomes ready Sep 13 00:58:02.539806 systemd-networkd[1071]: calia79c7d24088: Gained carrier Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.192 [INFO][3952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0 calico-kube-controllers-b845c7695- calico-system eca60e6b-177b-4588-8e7f-a2dc081264e1 956 0 2025-09-13 00:57:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b845c7695 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 calico-kube-controllers-b845c7695-sr7sp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia79c7d24088 [] [] }} ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Namespace="calico-system" Pod="calico-kube-controllers-b845c7695-sr7sp" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.193 [INFO][3952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Namespace="calico-system" Pod="calico-kube-controllers-b845c7695-sr7sp" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.316 [INFO][3986] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" HandleID="k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.317 [INFO][3986] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" 
HandleID="k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", "pod":"calico-kube-controllers-b845c7695-sr7sp", "timestamp":"2025-09-13 00:58:02.316950555 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.317 [INFO][3986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.340 [INFO][3986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.340 [INFO][3986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.395 [INFO][3986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.415 [INFO][3986] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.423 [INFO][3986] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.427 [INFO][3986] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.431 [INFO][3986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.431 [INFO][3986] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.443 [INFO][3986] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.450 [INFO][3986] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.460 [INFO][3986] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.4/26] block=192.168.106.0/26 handle="k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.461 [INFO][3986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.106.4/26] handle="k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.461 [INFO][3986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:02.565933 env[1325]: 2025-09-13 00:58:02.480 [INFO][3986] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.4/26] IPv6=[] ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" HandleID="k8s-pod-network.6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.567700 env[1325]: 2025-09-13 00:58:02.482 [INFO][3952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Namespace="calico-system" Pod="calico-kube-controllers-b845c7695-sr7sp" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0", GenerateName:"calico-kube-controllers-b845c7695-", Namespace:"calico-system", SelfLink:"", UID:"eca60e6b-177b-4588-8e7f-a2dc081264e1", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b845c7695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"", Pod:"calico-kube-controllers-b845c7695-sr7sp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia79c7d24088", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:02.567700 env[1325]: 2025-09-13 00:58:02.483 [INFO][3952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.4/32] ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Namespace="calico-system" Pod="calico-kube-controllers-b845c7695-sr7sp" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.567700 env[1325]: 2025-09-13 00:58:02.483 [INFO][3952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia79c7d24088 ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Namespace="calico-system" Pod="calico-kube-controllers-b845c7695-sr7sp" 
WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.567700 env[1325]: 2025-09-13 00:58:02.541 [INFO][3952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Namespace="calico-system" Pod="calico-kube-controllers-b845c7695-sr7sp" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.567700 env[1325]: 2025-09-13 00:58:02.541 [INFO][3952] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Namespace="calico-system" Pod="calico-kube-controllers-b845c7695-sr7sp" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0", GenerateName:"calico-kube-controllers-b845c7695-", Namespace:"calico-system", SelfLink:"", UID:"eca60e6b-177b-4588-8e7f-a2dc081264e1", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b845c7695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c", Pod:"calico-kube-controllers-b845c7695-sr7sp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia79c7d24088", MAC:"ea:33:81:73:46:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:02.567700 env[1325]: 2025-09-13 00:58:02.560 [INFO][3952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c" Namespace="calico-system" Pod="calico-kube-controllers-b845c7695-sr7sp" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:02.616220 systemd[1]: run-containerd-runc-k8s.io-9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177-runc.N3GqeR.mount: Deactivated successfully. 
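
For each sandbox the kernel logs ADDRCONF(NETDEV_CHANGE) and systemd-networkd reports the host-side cali* veth gaining carrier. A hedged sketch of confirming that state from userspace by reading the standard sysfs attributes; it assumes it runs on the node itself, and the interface name is the one assigned in the records above:

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// linkState reads the kernel's view of an interface from sysfs.
func linkState(iface string) (operstate, carrier string, err error) {
    base := filepath.Join("/sys/class/net", iface)
    op, err := os.ReadFile(filepath.Join(base, "operstate"))
    if err != nil {
        return "", "", err
    }
    ca, err := os.ReadFile(filepath.Join(base, "carrier"))
    if err != nil {
        return "", "", err
    }
    return strings.TrimSpace(string(op)), strings.TrimSpace(string(ca)), nil
}

func main() {
    // Host-side veth created for calico-kube-controllers-b845c7695-sr7sp above.
    op, ca, err := linkState("calia79c7d24088")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    fmt.Printf("operstate=%s carrier=%s\n", op, ca)
}
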
Sep 13 00:58:02.639453 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4122fbf5ed7: link becomes ready Sep 13 00:58:02.641696 systemd-networkd[1071]: cali4122fbf5ed7: Link UP Sep 13 00:58:02.641990 systemd-networkd[1071]: cali4122fbf5ed7: Gained carrier Sep 13 00:58:02.663000 audit[4049]: NETFILTER_CFG table=filter:111 family=2 entries=50 op=nft_register_chain pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:58:02.663000 audit[4049]: SYSCALL arch=c000003e syscall=46 success=yes exit=24804 a0=3 a1=7ffd1050d340 a2=0 a3=7ffd1050d32c items=0 ppid=3499 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:02.663000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.262 [INFO][3965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0 calico-apiserver-649475f784- calico-apiserver 5482a9ae-642d-42d0-b694-214ca0591875 957 0 2025-09-13 00:57:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:649475f784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 calico-apiserver-649475f784-qqkdv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4122fbf5ed7 [] [] }} ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-qqkdv" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.262 [INFO][3965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-qqkdv" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.377 [INFO][3994] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" HandleID="k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.378 [INFO][3994] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" HandleID="k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9780), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", "pod":"calico-apiserver-649475f784-qqkdv", "timestamp":"2025-09-13 00:58:02.377674322 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.393 [INFO][3994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.461 [INFO][3994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.461 [INFO][3994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.536 [INFO][3994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.552 [INFO][3994] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.566 [INFO][3994] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.569 [INFO][3994] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.573 [INFO][3994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.574 [INFO][3994] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.576 [INFO][3994] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.585 [INFO][3994] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.611 [INFO][3994] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.5/26] block=192.168.106.0/26 handle="k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.611 [INFO][3994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.5/26] handle="k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.611 [INFO][3994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:58:02.687651 env[1325]: 2025-09-13 00:58:02.611 [INFO][3994] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.5/26] IPv6=[] ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" HandleID="k8s-pod-network.2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.688935 env[1325]: 2025-09-13 00:58:02.623 [INFO][3965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-qqkdv" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0", GenerateName:"calico-apiserver-649475f784-", Namespace:"calico-apiserver", SelfLink:"", UID:"5482a9ae-642d-42d0-b694-214ca0591875", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649475f784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"", Pod:"calico-apiserver-649475f784-qqkdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4122fbf5ed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:02.688935 env[1325]: 2025-09-13 00:58:02.624 [INFO][3965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.5/32] ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-qqkdv" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.688935 env[1325]: 2025-09-13 00:58:02.624 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4122fbf5ed7 ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-qqkdv" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.688935 env[1325]: 2025-09-13 00:58:02.651 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-qqkdv" 
WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.688935 env[1325]: 2025-09-13 00:58:02.661 [INFO][3965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-qqkdv" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0", GenerateName:"calico-apiserver-649475f784-", Namespace:"calico-apiserver", SelfLink:"", UID:"5482a9ae-642d-42d0-b694-214ca0591875", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649475f784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb", Pod:"calico-apiserver-649475f784-qqkdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4122fbf5ed7", MAC:"52:76:fe:7b:09:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:02.688935 env[1325]: 2025-09-13 00:58:02.678 [INFO][3965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb" Namespace="calico-apiserver" Pod="calico-apiserver-649475f784-qqkdv" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:02.704355 env[1325]: time="2025-09-13T00:58:02.695794517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:58:02.704355 env[1325]: time="2025-09-13T00:58:02.700735715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:58:02.704355 env[1325]: time="2025-09-13T00:58:02.700760992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:58:02.709713 env[1325]: time="2025-09-13T00:58:02.705943841Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c pid=4062 runtime=io.containerd.runc.v2 Sep 13 00:58:02.740000 audit[4082]: NETFILTER_CFG table=filter:112 family=2 entries=45 op=nft_register_chain pid=4082 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:58:02.740000 audit[4082]: SYSCALL arch=c000003e syscall=46 success=yes exit=24248 a0=3 a1=7fff7ba81610 a2=0 a3=7fff7ba815fc items=0 ppid=3499 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:02.740000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:58:02.754194 env[1325]: time="2025-09-13T00:58:02.754070327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649475f784-nsdkw,Uid:c749baa2-250c-406e-806c-5781eafb74e7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177\"" Sep 13 00:58:02.758024 env[1325]: time="2025-09-13T00:58:02.756581492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:58:02.784435 env[1325]: time="2025-09-13T00:58:02.784207087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:58:02.784435 env[1325]: time="2025-09-13T00:58:02.784267643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:58:02.784435 env[1325]: time="2025-09-13T00:58:02.784285786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:58:02.785115 env[1325]: time="2025-09-13T00:58:02.785044342Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb pid=4104 runtime=io.containerd.runc.v2 Sep 13 00:58:02.857921 env[1325]: time="2025-09-13T00:58:02.857843433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b845c7695-sr7sp,Uid:eca60e6b-177b-4588-8e7f-a2dc081264e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c\"" Sep 13 00:58:02.896485 env[1325]: time="2025-09-13T00:58:02.896317855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649475f784-qqkdv,Uid:5482a9ae-642d-42d0-b694-214ca0591875,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb\"" Sep 13 00:58:03.722377 env[1325]: time="2025-09-13T00:58:03.722324978Z" level=info msg="StopPodSandbox for \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\"" Sep 13 00:58:03.737181 env[1325]: time="2025-09-13T00:58:03.737124874Z" level=info msg="StopPodSandbox for \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\"" Sep 13 00:58:03.878460 systemd-networkd[1071]: cali4122fbf5ed7: Gained IPv6LL Sep 13 00:58:03.935193 systemd-networkd[1071]: calia79c7d24088: Gained IPv6LL Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.853 [INFO][4173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.853 [INFO][4173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" iface="eth0" netns="/var/run/netns/cni-791462fd-dba5-3f53-3245-3cf4c2cbc459" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.853 [INFO][4173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" iface="eth0" netns="/var/run/netns/cni-791462fd-dba5-3f53-3245-3cf4c2cbc459" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.854 [INFO][4173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" iface="eth0" netns="/var/run/netns/cni-791462fd-dba5-3f53-3245-3cf4c2cbc459" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.854 [INFO][4173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.854 [INFO][4173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.941 [INFO][4185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.941 [INFO][4185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.941 [INFO][4185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.951 [WARNING][4185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.952 [INFO][4185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.954 [INFO][4185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:03.958507 env[1325]: 2025-09-13 00:58:03.956 [INFO][4173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:03.959536 env[1325]: time="2025-09-13T00:58:03.959485595Z" level=info msg="TearDown network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\" successfully" Sep 13 00:58:03.959724 env[1325]: time="2025-09-13T00:58:03.959693453Z" level=info msg="StopPodSandbox for \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\" returns successfully" Sep 13 00:58:03.960914 env[1325]: time="2025-09-13T00:58:03.960874776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x5h4h,Uid:02a91fdd-1f7d-4977-ad95-07ea1dc01154,Namespace:kube-system,Attempt:1,}" Sep 13 00:58:03.964821 systemd[1]: run-netns-cni\x2d791462fd\x2ddba5\x2d3f53\x2d3245\x2d3cf4c2cbc459.mount: Deactivated successfully. Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:03.900 [INFO][4177] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:03.900 [INFO][4177] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" iface="eth0" netns="/var/run/netns/cni-49b4e23b-2743-2e97-b728-69e69c118381" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:03.900 [INFO][4177] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" iface="eth0" netns="/var/run/netns/cni-49b4e23b-2743-2e97-b728-69e69c118381" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:03.900 [INFO][4177] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" iface="eth0" netns="/var/run/netns/cni-49b4e23b-2743-2e97-b728-69e69c118381" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:03.900 [INFO][4177] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:03.900 [INFO][4177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:04.054 [INFO][4192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:04.055 [INFO][4192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:04.055 [INFO][4192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:04.063 [WARNING][4192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:04.063 [INFO][4192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:04.065 [INFO][4192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:04.070883 env[1325]: 2025-09-13 00:58:04.068 [INFO][4177] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:04.077996 systemd[1]: run-netns-cni\x2d49b4e23b\x2d2743\x2d2e97\x2db728\x2d69e69c118381.mount: Deactivated successfully. 
Sep 13 00:58:04.078365 env[1325]: time="2025-09-13T00:58:04.078285857Z" level=info msg="TearDown network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\" successfully" Sep 13 00:58:04.078536 env[1325]: time="2025-09-13T00:58:04.078506843Z" level=info msg="StopPodSandbox for \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\" returns successfully" Sep 13 00:58:04.086689 env[1325]: time="2025-09-13T00:58:04.086600873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-mqrbm,Uid:7a8637c4-413d-4e61-bef0-740ff2360374,Namespace:calico-system,Attempt:1,}" Sep 13 00:58:04.318962 systemd-networkd[1071]: calia599228491e: Gained IPv6LL Sep 13 00:58:04.483705 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:58:04.474891 systemd-networkd[1071]: cali2188de7e510: Link UP Sep 13 00:58:04.496092 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2188de7e510: link becomes ready Sep 13 00:58:04.497872 systemd-networkd[1071]: cali2188de7e510: Gained carrier Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.323 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0 coredns-7c65d6cfc9- kube-system 02a91fdd-1f7d-4977-ad95-07ea1dc01154 985 0 2025-09-13 00:57:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 coredns-7c65d6cfc9-x5h4h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2188de7e510 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5h4h" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.324 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5h4h" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.386 [INFO][4228] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" HandleID="k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.386 [INFO][4228] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" HandleID="k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9270), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", "pod":"coredns-7c65d6cfc9-x5h4h", "timestamp":"2025-09-13 00:58:04.386310169 
+0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.386 [INFO][4228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.386 [INFO][4228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.386 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.399 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.412 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.420 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.424 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.427 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.427 [INFO][4228] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.429 [INFO][4228] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92 Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.435 [INFO][4228] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.450 [INFO][4228] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.6/26] block=192.168.106.0/26 handle="k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.450 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.6/26] handle="k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.450 [INFO][4228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:58:04.546416 env[1325]: 2025-09-13 00:58:04.450 [INFO][4228] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.6/26] IPv6=[] ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" HandleID="k8s-pod-network.fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:04.547828 env[1325]: 2025-09-13 00:58:04.457 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5h4h" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02a91fdd-1f7d-4977-ad95-07ea1dc01154", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"", Pod:"coredns-7c65d6cfc9-x5h4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2188de7e510", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:04.547828 env[1325]: 2025-09-13 00:58:04.458 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.6/32] ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5h4h" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:04.547828 env[1325]: 2025-09-13 00:58:04.458 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2188de7e510 ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5h4h" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:04.547828 env[1325]: 2025-09-13 00:58:04.499 [INFO][4200] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5h4h" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:04.547828 env[1325]: 2025-09-13 00:58:04.500 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5h4h" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02a91fdd-1f7d-4977-ad95-07ea1dc01154", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92", Pod:"coredns-7c65d6cfc9-x5h4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2188de7e510", MAC:"aa:30:21:35:03:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:04.547828 env[1325]: 2025-09-13 00:58:04.537 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x5h4h" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:04.600383 systemd-networkd[1071]: calid381f404b4d: Link UP Sep 13 00:58:04.616736 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid381f404b4d: link becomes ready Sep 13 00:58:04.619180 systemd-networkd[1071]: calid381f404b4d: Gained carrier Sep 13 00:58:04.649952 env[1325]: time="2025-09-13T00:58:04.649820542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.313 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0 goldmane-7988f88666- calico-system 7a8637c4-413d-4e61-bef0-740ff2360374 986 0 2025-09-13 00:57:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 goldmane-7988f88666-mqrbm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid381f404b4d [] [] }} ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Namespace="calico-system" Pod="goldmane-7988f88666-mqrbm" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.313 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Namespace="calico-system" Pod="goldmane-7988f88666-mqrbm" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.409 [INFO][4223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" HandleID="k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.410 [INFO][4223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" HandleID="k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8490), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", "pod":"goldmane-7988f88666-mqrbm", "timestamp":"2025-09-13 00:58:04.409497843 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.411 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.454 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.454 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.499 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.519 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.525 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.536 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.545 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.545 [INFO][4223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.550 [INFO][4223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.566 [INFO][4223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.579 [INFO][4223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.7/26] block=192.168.106.0/26 handle="k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.579 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.7/26] handle="k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.579 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
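One detail worth decoding from the coredns WorkloadEndpoint dump a few records up: its ports are printed in hex (Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics), which are just the usual CoreDNS ports in decimal:

    # Port values from the coredns WorkloadEndpoint dump above, in decimal.
    print(0x35, 0x23c1)    # 53 9153 -> DNS over UDP/TCP and the Prometheus metrics port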
Sep 13 00:58:04.650282 env[1325]: 2025-09-13 00:58:04.579 [INFO][4223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.7/26] IPv6=[] ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" HandleID="k8s-pod-network.7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.651424 env[1325]: 2025-09-13 00:58:04.583 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Namespace="calico-system" Pod="goldmane-7988f88666-mqrbm" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7a8637c4-413d-4e61-bef0-740ff2360374", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"", Pod:"goldmane-7988f88666-mqrbm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid381f404b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:04.651424 env[1325]: 2025-09-13 00:58:04.583 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.7/32] ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Namespace="calico-system" Pod="goldmane-7988f88666-mqrbm" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.651424 env[1325]: 2025-09-13 00:58:04.583 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid381f404b4d ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Namespace="calico-system" Pod="goldmane-7988f88666-mqrbm" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.651424 env[1325]: 2025-09-13 00:58:04.626 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Namespace="calico-system" Pod="goldmane-7988f88666-mqrbm" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.651424 env[1325]: 2025-09-13 00:58:04.627 
[INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Namespace="calico-system" Pod="goldmane-7988f88666-mqrbm" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7a8637c4-413d-4e61-bef0-740ff2360374", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf", Pod:"goldmane-7988f88666-mqrbm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid381f404b4d", MAC:"f6:4b:9e:74:cc:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:04.651424 env[1325]: 2025-09-13 00:58:04.643 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf" Namespace="calico-system" Pod="goldmane-7988f88666-mqrbm" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:04.670790 env[1325]: time="2025-09-13T00:58:04.654598645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:58:04.670790 env[1325]: time="2025-09-13T00:58:04.654707136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:58:04.670790 env[1325]: time="2025-09-13T00:58:04.655222878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92 pid=4256 runtime=io.containerd.runc.v2 Sep 13 00:58:04.727965 env[1325]: time="2025-09-13T00:58:04.727908540Z" level=info msg="StopPodSandbox for \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\"" Sep 13 00:58:04.797445 env[1325]: time="2025-09-13T00:58:04.790797609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:58:04.797445 env[1325]: time="2025-09-13T00:58:04.790852931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:58:04.797445 env[1325]: time="2025-09-13T00:58:04.790874105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:58:04.797445 env[1325]: time="2025-09-13T00:58:04.791128486Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf pid=4305 runtime=io.containerd.runc.v2 Sep 13 00:58:04.833000 audit[4327]: NETFILTER_CFG table=filter:113 family=2 entries=86 op=nft_register_chain pid=4327 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:58:04.844821 kernel: kauditd_printk_skb: 577 callbacks suppressed Sep 13 00:58:04.844893 kernel: audit: type=1325 audit(1757725084.833:399): table=filter:113 family=2 entries=86 op=nft_register_chain pid=4327 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:58:04.833000 audit[4327]: SYSCALL arch=c000003e syscall=46 success=yes exit=46648 a0=3 a1=7ffc41696db0 a2=0 a3=7ffc41696d9c items=0 ppid=3499 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:04.901095 kernel: audit: type=1300 audit(1757725084.833:399): arch=c000003e syscall=46 success=yes exit=46648 a0=3 a1=7ffc41696db0 a2=0 a3=7ffc41696d9c items=0 ppid=3499 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:04.901250 kernel: audit: type=1327 audit(1757725084.833:399): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:58:04.833000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:58:04.932289 env[1325]: time="2025-09-13T00:58:04.931904043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x5h4h,Uid:02a91fdd-1f7d-4977-ad95-07ea1dc01154,Namespace:kube-system,Attempt:1,} returns sandbox id \"fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92\"" Sep 13 00:58:04.977309 env[1325]: time="2025-09-13T00:58:04.977250542Z" level=info msg="CreateContainer within sandbox \"fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:58:05.082784 env[1325]: time="2025-09-13T00:58:05.075764997Z" level=info msg="CreateContainer within sandbox \"fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ea384606c6cdd80c49422ca47672047adf96dd5ee3c3e9b7f3044287ca5f64c\"" Sep 13 00:58:05.085992 env[1325]: time="2025-09-13T00:58:05.083410690Z" level=info msg="StartContainer for \"0ea384606c6cdd80c49422ca47672047adf96dd5ee3c3e9b7f3044287ca5f64c\"" Sep 13 00:58:05.088997 env[1325]: time="2025-09-13T00:58:05.088940311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-mqrbm,Uid:7a8637c4-413d-4e61-bef0-740ff2360374,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf\"" Sep 13 00:58:05.271576 systemd[1]: run-containerd-runc-k8s.io-0ea384606c6cdd80c49422ca47672047adf96dd5ee3c3e9b7f3044287ca5f64c-runc.ewlPCI.mount: Deactivated successfully. Sep 13 00:58:05.353382 env[1325]: time="2025-09-13T00:58:05.350220954Z" level=info msg="StartContainer for \"0ea384606c6cdd80c49422ca47672047adf96dd5ee3c3e9b7f3044287ca5f64c\" returns successfully" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.097 [INFO][4329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.097 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" iface="eth0" netns="/var/run/netns/cni-663d6338-b577-376f-7243-4495ffcc448a" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.097 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" iface="eth0" netns="/var/run/netns/cni-663d6338-b577-376f-7243-4495ffcc448a" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.097 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" iface="eth0" netns="/var/run/netns/cni-663d6338-b577-376f-7243-4495ffcc448a" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.097 [INFO][4329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.097 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.317 [INFO][4356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.318 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.318 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.328 [WARNING][4356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.328 [INFO][4356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.346 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:05.384711 env[1325]: 2025-09-13 00:58:05.359 [INFO][4329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:05.384711 env[1325]: time="2025-09-13T00:58:05.377770947Z" level=info msg="TearDown network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\" successfully" Sep 13 00:58:05.384711 env[1325]: time="2025-09-13T00:58:05.377820975Z" level=info msg="StopPodSandbox for \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\" returns successfully" Sep 13 00:58:05.384711 env[1325]: time="2025-09-13T00:58:05.378764401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjjfb,Uid:6497f54b-f081-4e3e-89dc-fe9b1d7d52c2,Namespace:calico-system,Attempt:1,}" Sep 13 00:58:05.388070 systemd[1]: run-netns-cni\x2d663d6338\x2db577\x2d376f\x2d7243\x2d4495ffcc448a.mount: Deactivated successfully. 
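The teardown traces above (sandboxes 9ba56898…, 7e73aa57… and 44c35c49…) show the release half of the IPAM flow: under the same host-wide lock the plugin first tries to release by handle ID, logs "Asked to release address but it doesn't exist. Ignoring" when nothing is recorded for that handle, then falls back to releasing by workload ID, so a repeated CNI DEL stays harmless. A standalone sketch of that idempotent release (hypothetical names, not Calico code):

    # Idempotent release keyed by handle ID, mirroring ipam_plugin.go 412/429/440 above.
    # `allocations` maps IP -> handle ID, as in the earlier assignment sketch.
    def release_by_handle(allocations, handle_id):
        owned = [ip for ip, handle in allocations.items() if handle == handle_id]
        if not owned:
            # "Asked to release address but it doesn't exist. Ignoring"
            return []
        for ip in owned:
            del allocations[ip]
        return owned

    allocations = {"192.168.106.5": "k8s-pod-network.example-handle"}
    print(release_by_handle(allocations, "k8s-pod-network.example-handle"))   # ['192.168.106.5']
    print(release_by_handle(allocations, "k8s-pod-network.example-handle"))   # []  (second DEL is a no-op)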
Sep 13 00:58:05.629851 systemd-networkd[1071]: cali5f13e73f1f7: Link UP Sep 13 00:58:05.645720 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:58:05.645839 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5f13e73f1f7: link becomes ready Sep 13 00:58:05.649001 systemd-networkd[1071]: cali5f13e73f1f7: Gained carrier Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.489 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0 csi-node-driver- calico-system 6497f54b-f081-4e3e-89dc-fe9b1d7d52c2 1001 0 2025-09-13 00:57:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4 csi-node-driver-bjjfb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5f13e73f1f7 [] [] }} ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Namespace="calico-system" Pod="csi-node-driver-bjjfb" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.489 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Namespace="calico-system" Pod="csi-node-driver-bjjfb" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.557 [INFO][4409] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" HandleID="k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.557 [INFO][4409] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" HandleID="k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b7490), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", "pod":"csi-node-driver-bjjfb", "timestamp":"2025-09-13 00:58:05.557070215 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.557 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.559 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.559 [INFO][4409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4' Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.569 [INFO][4409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.575 [INFO][4409] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.580 [INFO][4409] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.583 [INFO][4409] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.587 [INFO][4409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.587 [INFO][4409] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.589 [INFO][4409] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0 Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.597 [INFO][4409] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.609 [INFO][4409] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.8/26] block=192.168.106.0/26 handle="k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.609 [INFO][4409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.8/26] handle="k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" host="ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4" Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.609 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:58:05.679848 env[1325]: 2025-09-13 00:58:05.609 [INFO][4409] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.8/26] IPv6=[] ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" HandleID="k8s-pod-network.c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.683177 env[1325]: 2025-09-13 00:58:05.618 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Namespace="calico-system" Pod="csi-node-driver-bjjfb" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"", Pod:"csi-node-driver-bjjfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f13e73f1f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:05.683177 env[1325]: 2025-09-13 00:58:05.618 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.8/32] ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Namespace="calico-system" Pod="csi-node-driver-bjjfb" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.683177 env[1325]: 2025-09-13 00:58:05.618 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f13e73f1f7 ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Namespace="calico-system" Pod="csi-node-driver-bjjfb" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.683177 env[1325]: 2025-09-13 00:58:05.657 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Namespace="calico-system" Pod="csi-node-driver-bjjfb" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" 
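Each time a cali* veth comes up, the trace shows the same pair of messages: the kernel's ADDRCONF(NETDEV_CHANGE) "link becomes ready", then systemd-networkd's "Gained carrier" and, a little later, "Gained IPv6LL" once the fe80::/64 link-local address is up. With the kernel's default addr_gen_mode that address is derived from the interface MAC by the modified EUI-64 rule (flip the universal/local bit, insert ff:fe in the middle). A quick illustration using the pod-side MAC from the goldmane endpoint dump, f6:4b:9e:74:cc:de; the resulting address is computed here only as an example and does not appear in the log:

    # Modified EUI-64: the fe80:: link-local address the kernel autoconfigures for a MAC.
    import ipaddress

    def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
        octets = bytearray(int(part, 16) for part in mac.split(":"))
        octets[0] ^= 0x02                                  # flip the universal/local bit
        eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
        return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)

    print(link_local_from_mac("f6:4b:9e:74:cc:de"))        # fe80::f44b:9eff:fe74:ccde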
Sep 13 00:58:05.683177 env[1325]: 2025-09-13 00:58:05.658 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Namespace="calico-system" Pod="csi-node-driver-bjjfb" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0", Pod:"csi-node-driver-bjjfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f13e73f1f7", MAC:"3a:ca:da:83:16:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:05.683177 env[1325]: 2025-09-13 00:58:05.677 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0" Namespace="calico-system" Pod="csi-node-driver-bjjfb" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:05.734680 kernel: audit: type=1325 audit(1757725085.713:400): table=filter:114 family=2 entries=52 op=nft_register_chain pid=4425 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:58:05.713000 audit[4425]: NETFILTER_CFG table=filter:114 family=2 entries=52 op=nft_register_chain pid=4425 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:58:05.713000 audit[4425]: SYSCALL arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7ffeddc45750 a2=0 a3=7ffeddc4573c items=0 ppid=3499 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:05.773036 kernel: audit: type=1300 audit(1757725085.713:400): arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7ffeddc45750 a2=0 a3=7ffeddc4573c items=0 ppid=3499 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:05.713000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:58:05.794719 kernel: audit: type=1327 audit(1757725085.713:400): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:58:05.826227 env[1325]: time="2025-09-13T00:58:05.826119194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:58:05.826227 env[1325]: time="2025-09-13T00:58:05.826187465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:58:05.826976 env[1325]: time="2025-09-13T00:58:05.826205785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:58:05.827347 env[1325]: time="2025-09-13T00:58:05.827290994Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0 pid=4433 runtime=io.containerd.runc.v2 Sep 13 00:58:05.920328 env[1325]: time="2025-09-13T00:58:05.920255980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjjfb,Uid:6497f54b-f081-4e3e-89dc-fe9b1d7d52c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0\"" Sep 13 00:58:06.046924 systemd-networkd[1071]: cali2188de7e510: Gained IPv6LL Sep 13 00:58:06.110796 systemd-networkd[1071]: calid381f404b4d: Gained IPv6LL Sep 13 00:58:06.203772 kubelet[2221]: I0913 00:58:06.202889 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-x5h4h" podStartSLOduration=46.202860878 podStartE2EDuration="46.202860878s" podCreationTimestamp="2025-09-13 00:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:58:06.199167177 +0000 UTC m=+52.785661196" watchObservedRunningTime="2025-09-13 00:58:06.202860878 +0000 UTC m=+52.789354899" Sep 13 00:58:06.269000 audit[4471]: NETFILTER_CFG table=filter:115 family=2 entries=12 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:06.288651 kernel: audit: type=1325 audit(1757725086.269:401): table=filter:115 family=2 entries=12 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:06.269000 audit[4471]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd3c183760 a2=0 a3=7ffd3c18374c items=0 ppid=2341 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:06.331723 kernel: audit: type=1300 audit(1757725086.269:401): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd3c183760 a2=0 a3=7ffd3c18374c items=0 ppid=2341 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:06.269000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:06.357780 kernel: audit: type=1327 audit(1757725086.269:401): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:06.294000 audit[4471]: NETFILTER_CFG table=nat:116 family=2 entries=46 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:06.378462 kernel: audit: type=1325 audit(1757725086.294:402): table=nat:116 family=2 entries=46 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:06.294000 audit[4471]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffd3c183760 a2=0 a3=7ffd3c18374c items=0 ppid=2341 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:06.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:06.403000 audit[4473]: NETFILTER_CFG table=filter:117 family=2 entries=12 op=nft_register_rule pid=4473 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:06.403000 audit[4473]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffecfe92a30 a2=0 a3=7ffecfe92a1c items=0 ppid=2341 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:06.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:06.425000 audit[4473]: NETFILTER_CFG table=nat:118 family=2 entries=58 op=nft_register_chain pid=4473 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:06.425000 audit[4473]: SYSCALL arch=c000003e syscall=46 success=yes exit=20628 a0=3 a1=7ffecfe92a30 a2=0 a3=7ffecfe92a1c items=0 ppid=2341 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:06.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:06.942827 systemd-networkd[1071]: cali5f13e73f1f7: Gained IPv6LL Sep 13 00:58:07.136841 env[1325]: time="2025-09-13T00:58:07.136781568Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:07.141531 env[1325]: time="2025-09-13T00:58:07.141479634Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:07.145233 env[1325]: time="2025-09-13T00:58:07.145189345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:07.151499 env[1325]: time="2025-09-13T00:58:07.151450181Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:07.152803 env[1325]: time="2025-09-13T00:58:07.152748200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:58:07.158023 env[1325]: time="2025-09-13T00:58:07.157972113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:58:07.161532 env[1325]: time="2025-09-13T00:58:07.161486426Z" level=info msg="CreateContainer within sandbox \"9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:58:07.203111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135649146.mount: Deactivated successfully. Sep 13 00:58:07.209771 env[1325]: time="2025-09-13T00:58:07.209713490Z" level=info msg="CreateContainer within sandbox \"9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5057f1252905505fc0c2921b43fc9de375bcfb130106ad90467806a97491dea3\"" Sep 13 00:58:07.211067 env[1325]: time="2025-09-13T00:58:07.211024390Z" level=info msg="StartContainer for \"5057f1252905505fc0c2921b43fc9de375bcfb130106ad90467806a97491dea3\"" Sep 13 00:58:07.298343 systemd[1]: run-containerd-runc-k8s.io-5057f1252905505fc0c2921b43fc9de375bcfb130106ad90467806a97491dea3-runc.u3cK48.mount: Deactivated successfully. Sep 13 00:58:07.391409 env[1325]: time="2025-09-13T00:58:07.391342157Z" level=info msg="StartContainer for \"5057f1252905505fc0c2921b43fc9de375bcfb130106ad90467806a97491dea3\" returns successfully" Sep 13 00:58:08.257000 audit[4514]: NETFILTER_CFG table=filter:119 family=2 entries=12 op=nft_register_rule pid=4514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:08.257000 audit[4514]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff4df55770 a2=0 a3=7fff4df5575c items=0 ppid=2341 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:08.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:08.262000 audit[4514]: NETFILTER_CFG table=nat:120 family=2 entries=22 op=nft_register_rule pid=4514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:08.262000 audit[4514]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff4df55770 a2=0 a3=7fff4df5575c items=0 ppid=2341 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:08.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:09.195659 kubelet[2221]: I0913 00:58:09.195137 2221 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:58:11.752173 env[1325]: time="2025-09-13T00:58:11.752100917Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:11.922525 env[1325]: time="2025-09-13T00:58:11.922462513Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:11.927854 env[1325]: time="2025-09-13T00:58:11.927791922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:11.930264 env[1325]: time="2025-09-13T00:58:11.930215937Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:11.931070 env[1325]: time="2025-09-13T00:58:11.931016556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:58:11.933125 env[1325]: time="2025-09-13T00:58:11.933088075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:58:11.960320 env[1325]: time="2025-09-13T00:58:11.960245707Z" level=info msg="CreateContainer within sandbox \"6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:58:11.980643 env[1325]: time="2025-09-13T00:58:11.980572205Z" level=info msg="CreateContainer within sandbox \"6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ab775a9ab22d814a697807faed069826d3a9d33ea762e31694a2064097f0b329\"" Sep 13 00:58:11.982944 env[1325]: time="2025-09-13T00:58:11.981761177Z" level=info msg="StartContainer for \"ab775a9ab22d814a697807faed069826d3a9d33ea762e31694a2064097f0b329\"" Sep 13 00:58:12.095222 env[1325]: time="2025-09-13T00:58:12.094471938Z" level=info msg="StartContainer for \"ab775a9ab22d814a697807faed069826d3a9d33ea762e31694a2064097f0b329\" returns successfully" Sep 13 00:58:12.140091 env[1325]: time="2025-09-13T00:58:12.140037336Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:12.145942 env[1325]: time="2025-09-13T00:58:12.145430931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:12.150092 env[1325]: time="2025-09-13T00:58:12.148674136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:12.152421 env[1325]: time="2025-09-13T00:58:12.152351419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:12.153478 env[1325]: time="2025-09-13T00:58:12.153407380Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:58:12.158245 env[1325]: time="2025-09-13T00:58:12.156304586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:58:12.158245 env[1325]: time="2025-09-13T00:58:12.157991746Z" level=info msg="CreateContainer within sandbox \"2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:58:12.178685 env[1325]: time="2025-09-13T00:58:12.178598946Z" level=info msg="CreateContainer within sandbox \"2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e640329c06b266e9a137a610683b81029b0f72430ab9d630995fc2d83c27e5b\"" Sep 13 00:58:12.179882 env[1325]: time="2025-09-13T00:58:12.179824344Z" level=info msg="StartContainer for \"1e640329c06b266e9a137a610683b81029b0f72430ab9d630995fc2d83c27e5b\"" Sep 13 00:58:12.253639 kubelet[2221]: I0913 00:58:12.253549 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b845c7695-sr7sp" podStartSLOduration=26.181030471 podStartE2EDuration="35.253523919s" podCreationTimestamp="2025-09-13 00:57:37 +0000 UTC" firstStartedPulling="2025-09-13 00:58:02.860020721 +0000 UTC m=+49.446514714" lastFinishedPulling="2025-09-13 00:58:11.932514151 +0000 UTC m=+58.519008162" observedRunningTime="2025-09-13 00:58:12.252271604 +0000 UTC m=+58.838765625" watchObservedRunningTime="2025-09-13 00:58:12.253523919 +0000 UTC m=+58.840017939" Sep 13 00:58:12.268066 kubelet[2221]: I0913 00:58:12.267980 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-649475f784-nsdkw" podStartSLOduration=35.869013207 podStartE2EDuration="40.267953257s" podCreationTimestamp="2025-09-13 00:57:32 +0000 UTC" firstStartedPulling="2025-09-13 00:58:02.755816759 +0000 UTC m=+49.342310755" lastFinishedPulling="2025-09-13 00:58:07.154756789 +0000 UTC m=+53.741250805" observedRunningTime="2025-09-13 00:58:08.221332632 +0000 UTC m=+54.807826651" watchObservedRunningTime="2025-09-13 00:58:12.267953257 +0000 UTC m=+58.854447278" Sep 13 00:58:12.422330 env[1325]: time="2025-09-13T00:58:12.422280800Z" level=info msg="StartContainer for \"1e640329c06b266e9a137a610683b81029b0f72430ab9d630995fc2d83c27e5b\" returns successfully" Sep 13 00:58:13.122059 kubelet[2221]: I0913 00:58:13.121140 2221 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:58:13.319123 kernel: kauditd_printk_skb: 14 callbacks suppressed Sep 13 00:58:13.319325 kernel: audit: type=1325 audit(1757725093.296:407): table=filter:121 family=2 entries=11 op=nft_register_rule pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:13.296000 audit[4630]: NETFILTER_CFG table=filter:121 family=2 entries=11 op=nft_register_rule pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:13.319508 kubelet[2221]: I0913 00:58:13.319211 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-649475f784-qqkdv" podStartSLOduration=32.066774453 podStartE2EDuration="41.319169156s" podCreationTimestamp="2025-09-13 00:57:32 +0000 UTC" firstStartedPulling="2025-09-13 00:58:02.902967139 +0000 UTC m=+49.489461148" lastFinishedPulling="2025-09-13 
00:58:12.155361837 +0000 UTC m=+58.741855851" observedRunningTime="2025-09-13 00:58:13.318454679 +0000 UTC m=+59.904948698" watchObservedRunningTime="2025-09-13 00:58:13.319169156 +0000 UTC m=+59.905663189" Sep 13 00:58:13.374676 kernel: audit: type=1300 audit(1757725093.296:407): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc4e0a6060 a2=0 a3=7ffc4e0a604c items=0 ppid=2341 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:13.296000 audit[4630]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc4e0a6060 a2=0 a3=7ffc4e0a604c items=0 ppid=2341 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:13.427870 kernel: audit: type=1327 audit(1757725093.296:407): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:13.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:13.457299 kernel: audit: type=1325 audit(1757725093.335:408): table=nat:122 family=2 entries=29 op=nft_register_chain pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:13.335000 audit[4630]: NETFILTER_CFG table=nat:122 family=2 entries=29 op=nft_register_chain pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:13.506905 kernel: audit: type=1300 audit(1757725093.335:408): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffc4e0a6060 a2=0 a3=7ffc4e0a604c items=0 ppid=2341 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:13.335000 audit[4630]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffc4e0a6060 a2=0 a3=7ffc4e0a604c items=0 ppid=2341 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:13.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:13.548636 kernel: audit: type=1327 audit(1757725093.335:408): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:13.576689 kernel: audit: type=1325 audit(1757725093.436:409): table=filter:123 family=2 entries=10 op=nft_register_rule pid=4632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:13.436000 audit[4632]: NETFILTER_CFG table=filter:123 family=2 entries=10 op=nft_register_rule pid=4632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:13.436000 audit[4632]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffecc47e750 a2=0 a3=7ffecc47e73c items=0 ppid=2341 pid=4632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:13.612880 kernel: audit: 
type=1300 audit(1757725093.436:409): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffecc47e750 a2=0 a3=7ffecc47e73c items=0 ppid=2341 pid=4632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:13.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:13.630633 kernel: audit: type=1327 audit(1757725093.436:409): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:13.442000 audit[4632]: NETFILTER_CFG table=nat:124 family=2 entries=32 op=nft_register_rule pid=4632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:13.442000 audit[4632]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffecc47e750 a2=0 a3=7ffecc47e73c items=0 ppid=2341 pid=4632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:13.649640 kernel: audit: type=1325 audit(1757725093.442:410): table=nat:124 family=2 entries=32 op=nft_register_rule pid=4632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:13.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:13.746842 env[1325]: time="2025-09-13T00:58:13.746758072Z" level=info msg="StopPodSandbox for \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\"" Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:13.906 [WARNING][4646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0", GenerateName:"calico-apiserver-649475f784-", Namespace:"calico-apiserver", SelfLink:"", UID:"c749baa2-250c-406e-806c-5781eafb74e7", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649475f784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177", Pod:"calico-apiserver-649475f784-nsdkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia599228491e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:13.906 [INFO][4646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:13.906 [INFO][4646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" iface="eth0" netns="" Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:13.906 [INFO][4646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:13.907 [INFO][4646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:14.011 [INFO][4653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:14.013 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:14.013 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:14.023 [WARNING][4653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:14.023 [INFO][4653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:14.033 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:14.042376 env[1325]: 2025-09-13 00:58:14.036 [INFO][4646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:14.043409 env[1325]: time="2025-09-13T00:58:14.043353152Z" level=info msg="TearDown network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\" successfully" Sep 13 00:58:14.043535 env[1325]: time="2025-09-13T00:58:14.043508912Z" level=info msg="StopPodSandbox for \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\" returns successfully" Sep 13 00:58:14.044408 env[1325]: time="2025-09-13T00:58:14.044371235Z" level=info msg="RemovePodSandbox for \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\"" Sep 13 00:58:14.044679 env[1325]: time="2025-09-13T00:58:14.044569539Z" level=info msg="Forcibly stopping sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\"" Sep 13 00:58:14.298599 kubelet[2221]: I0913 00:58:14.297318 2221 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.234 [WARNING][4669] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0", GenerateName:"calico-apiserver-649475f784-", Namespace:"calico-apiserver", SelfLink:"", UID:"c749baa2-250c-406e-806c-5781eafb74e7", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649475f784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"9fe94803be0d19f60c77032dde0bf7e24bc9092804aea168778d1b2939b89177", Pod:"calico-apiserver-649475f784-nsdkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia599228491e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.234 [INFO][4669] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.234 [INFO][4669] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" iface="eth0" netns="" Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.234 [INFO][4669] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.234 [INFO][4669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.287 [INFO][4676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.288 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.288 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.300 [WARNING][4676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.300 [INFO][4676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" HandleID="k8s-pod-network.1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--nsdkw-eth0" Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.302 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:14.305843 env[1325]: 2025-09-13 00:58:14.304 [INFO][4669] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45" Sep 13 00:58:14.307083 env[1325]: time="2025-09-13T00:58:14.307007387Z" level=info msg="TearDown network for sandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\" successfully" Sep 13 00:58:14.313965 env[1325]: time="2025-09-13T00:58:14.313915448Z" level=info msg="RemovePodSandbox \"1b74dcc1a2da2a13c759f3a15e59b9394a7dc575cb47f8826fa9069bea46cd45\" returns successfully" Sep 13 00:58:14.314896 env[1325]: time="2025-09-13T00:58:14.314861327Z" level=info msg="StopPodSandbox for \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\"" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.401 [WARNING][4692] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.402 [INFO][4692] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.402 [INFO][4692] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" iface="eth0" netns="" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.402 [INFO][4692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.402 [INFO][4692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.481 [INFO][4699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.481 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.481 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.491 [WARNING][4699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.491 [INFO][4699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.494 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:14.498257 env[1325]: 2025-09-13 00:58:14.496 [INFO][4692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:58:14.499060 env[1325]: time="2025-09-13T00:58:14.498303763Z" level=info msg="TearDown network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\" successfully" Sep 13 00:58:14.499060 env[1325]: time="2025-09-13T00:58:14.498346175Z" level=info msg="StopPodSandbox for \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\" returns successfully" Sep 13 00:58:14.499060 env[1325]: time="2025-09-13T00:58:14.498985693Z" level=info msg="RemovePodSandbox for \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\"" Sep 13 00:58:14.499237 env[1325]: time="2025-09-13T00:58:14.499031202Z" level=info msg="Forcibly stopping sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\"" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.588 [WARNING][4715] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" WorkloadEndpoint="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.588 [INFO][4715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.588 [INFO][4715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" iface="eth0" netns="" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.588 [INFO][4715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.588 [INFO][4715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.639 [INFO][4722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.639 [INFO][4722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.639 [INFO][4722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.650 [WARNING][4722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.650 [INFO][4722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" HandleID="k8s-pod-network.1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-whisker--bd8b687f--9jrzd-eth0" Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.652 [INFO][4722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:14.655904 env[1325]: 2025-09-13 00:58:14.653 [INFO][4715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87" Sep 13 00:58:14.656904 env[1325]: time="2025-09-13T00:58:14.656851360Z" level=info msg="TearDown network for sandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\" successfully" Sep 13 00:58:14.662189 env[1325]: time="2025-09-13T00:58:14.662139924Z" level=info msg="RemovePodSandbox \"1b7c5053daf5c566b3a83e360f318bc60e45fc0e2c82719725d2a1da4a0eff87\" returns successfully" Sep 13 00:58:14.663105 env[1325]: time="2025-09-13T00:58:14.663068527Z" level=info msg="StopPodSandbox for \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\"" Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.767 [WARNING][4738] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0", GenerateName:"calico-apiserver-649475f784-", Namespace:"calico-apiserver", SelfLink:"", UID:"5482a9ae-642d-42d0-b694-214ca0591875", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649475f784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb", Pod:"calico-apiserver-649475f784-qqkdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4122fbf5ed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.767 [INFO][4738] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.767 [INFO][4738] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" iface="eth0" netns="" Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.767 [INFO][4738] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.767 [INFO][4738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.814 [INFO][4746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.815 [INFO][4746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.815 [INFO][4746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.826 [WARNING][4746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.827 [INFO][4746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.830 [INFO][4746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:14.845921 env[1325]: 2025-09-13 00:58:14.833 [INFO][4738] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:14.847538 env[1325]: time="2025-09-13T00:58:14.845956037Z" level=info msg="TearDown network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\" successfully" Sep 13 00:58:14.847538 env[1325]: time="2025-09-13T00:58:14.845997395Z" level=info msg="StopPodSandbox for \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\" returns successfully" Sep 13 00:58:14.847538 env[1325]: time="2025-09-13T00:58:14.846581723Z" level=info msg="RemovePodSandbox for \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\"" Sep 13 00:58:14.847538 env[1325]: time="2025-09-13T00:58:14.846657675Z" level=info msg="Forcibly stopping sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\"" Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:14.947 [WARNING][4762] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0", GenerateName:"calico-apiserver-649475f784-", Namespace:"calico-apiserver", SelfLink:"", UID:"5482a9ae-642d-42d0-b694-214ca0591875", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649475f784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"2b990fddd1be1c9ddb3550e0db6fa4f09c0db1c206876e03fdd1a93d46c825cb", Pod:"calico-apiserver-649475f784-qqkdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4122fbf5ed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:14.948 [INFO][4762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:14.948 [INFO][4762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" iface="eth0" netns="" Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:14.948 [INFO][4762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:14.948 [INFO][4762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:15.017 [INFO][4769] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:15.017 [INFO][4769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:15.018 [INFO][4769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:15.026 [WARNING][4769] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:15.026 [INFO][4769] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" HandleID="k8s-pod-network.6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--apiserver--649475f784--qqkdv-eth0" Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:15.028 [INFO][4769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:15.033205 env[1325]: 2025-09-13 00:58:15.030 [INFO][4762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce" Sep 13 00:58:15.034296 env[1325]: time="2025-09-13T00:58:15.034241004Z" level=info msg="TearDown network for sandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\" successfully" Sep 13 00:58:15.039973 env[1325]: time="2025-09-13T00:58:15.039918028Z" level=info msg="RemovePodSandbox \"6e0be0b2c624be6db9080fae5188e76a188b1e2c9b78f4d3e73a780c656d67ce\" returns successfully" Sep 13 00:58:15.040963 env[1325]: time="2025-09-13T00:58:15.040925335Z" level=info msg="StopPodSandbox for \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\"" Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.125 [WARNING][4785] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"820afca3-f77c-4bac-b219-c18864653831", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa", Pod:"coredns-7c65d6cfc9-28jg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b538b57bc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.126 [INFO][4785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.126 [INFO][4785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" iface="eth0" netns="" Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.126 [INFO][4785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.126 [INFO][4785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.171 [INFO][4792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.171 [INFO][4792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.171 [INFO][4792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.183 [WARNING][4792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.183 [INFO][4792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.188 [INFO][4792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:15.193503 env[1325]: 2025-09-13 00:58:15.191 [INFO][4785] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:58:15.194712 env[1325]: time="2025-09-13T00:58:15.194664287Z" level=info msg="TearDown network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\" successfully" Sep 13 00:58:15.194856 env[1325]: time="2025-09-13T00:58:15.194825999Z" level=info msg="StopPodSandbox for \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\" returns successfully" Sep 13 00:58:15.195708 env[1325]: time="2025-09-13T00:58:15.195670686Z" level=info msg="RemovePodSandbox for \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\"" Sep 13 00:58:15.195914 env[1325]: time="2025-09-13T00:58:15.195852904Z" level=info msg="Forcibly stopping sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\"" Sep 13 00:58:15.333000 audit[4815]: NETFILTER_CFG table=filter:125 family=2 entries=10 op=nft_register_rule pid=4815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:15.333000 audit[4815]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc1c8f8ea0 a2=0 a3=7ffc1c8f8e8c items=0 ppid=2341 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:15.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:15.340000 audit[4815]: NETFILTER_CFG table=nat:126 family=2 entries=36 op=nft_register_chain pid=4815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:15.340000 audit[4815]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffc1c8f8ea0 a2=0 a3=7ffc1c8f8e8c items=0 ppid=2341 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:15.340000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.344 [WARNING][4808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"820afca3-f77c-4bac-b219-c18864653831", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"9e767cfef0f40e78ff9a0b440c37549193025fae896d0f6875953dd262e024fa", Pod:"coredns-7c65d6cfc9-28jg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b538b57bc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.344 [INFO][4808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.344 [INFO][4808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" iface="eth0" netns="" Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.344 [INFO][4808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.345 [INFO][4808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.393 [INFO][4817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.393 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.393 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.404 [WARNING][4817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.404 [INFO][4817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" HandleID="k8s-pod-network.19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--28jg5-eth0" Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.406 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:15.412554 env[1325]: 2025-09-13 00:58:15.409 [INFO][4808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d" Sep 13 00:58:15.413599 env[1325]: time="2025-09-13T00:58:15.412581318Z" level=info msg="TearDown network for sandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\" successfully" Sep 13 00:58:15.417431 env[1325]: time="2025-09-13T00:58:15.417372925Z" level=info msg="RemovePodSandbox \"19631062f05c35768248ea39c96eecef05a36eded8c9ce0fbfe6c614d920995d\" returns successfully" Sep 13 00:58:15.418009 env[1325]: time="2025-09-13T00:58:15.417969142Z" level=info msg="StopPodSandbox for \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\"" Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.518 [WARNING][4834] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0", Pod:"csi-node-driver-bjjfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f13e73f1f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.519 [INFO][4834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.519 [INFO][4834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" iface="eth0" netns="" Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.519 [INFO][4834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.519 [INFO][4834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.576 [INFO][4841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.577 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.577 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.587 [WARNING][4841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.587 [INFO][4841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.590 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:15.596815 env[1325]: 2025-09-13 00:58:15.593 [INFO][4834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:15.597862 env[1325]: time="2025-09-13T00:58:15.597805908Z" level=info msg="TearDown network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\" successfully" Sep 13 00:58:15.597994 env[1325]: time="2025-09-13T00:58:15.597965971Z" level=info msg="StopPodSandbox for \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\" returns successfully" Sep 13 00:58:15.598878 env[1325]: time="2025-09-13T00:58:15.598840358Z" level=info msg="RemovePodSandbox for \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\"" Sep 13 00:58:15.599110 env[1325]: time="2025-09-13T00:58:15.599053087Z" level=info msg="Forcibly stopping sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\"" Sep 13 00:58:15.615352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176349878.mount: Deactivated successfully. Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.678 [WARNING][4856] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6497f54b-f081-4e3e-89dc-fe9b1d7d52c2", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0", Pod:"csi-node-driver-bjjfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f13e73f1f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.678 [INFO][4856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.678 [INFO][4856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" iface="eth0" netns="" Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.678 [INFO][4856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.678 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.733 [INFO][4863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.733 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.733 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.744 [WARNING][4863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.745 [INFO][4863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" HandleID="k8s-pod-network.44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-csi--node--driver--bjjfb-eth0" Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.747 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:15.771668 env[1325]: 2025-09-13 00:58:15.749 [INFO][4856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66" Sep 13 00:58:15.773071 env[1325]: time="2025-09-13T00:58:15.771712911Z" level=info msg="TearDown network for sandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\" successfully" Sep 13 00:58:15.777669 env[1325]: time="2025-09-13T00:58:15.777593683Z" level=info msg="RemovePodSandbox \"44c35c4957dd0720503141ae8b7300a37c4e704ab9115c371d51b2ef2bf0de66\" returns successfully" Sep 13 00:58:15.778372 env[1325]: time="2025-09-13T00:58:15.778330448Z" level=info msg="StopPodSandbox for \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\"" Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.860 [WARNING][4881] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02a91fdd-1f7d-4977-ad95-07ea1dc01154", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92", Pod:"coredns-7c65d6cfc9-x5h4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2188de7e510", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.861 [INFO][4881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.861 [INFO][4881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" iface="eth0" netns="" Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.861 [INFO][4881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.861 [INFO][4881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.915 [INFO][4888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.916 [INFO][4888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.916 [INFO][4888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.926 [WARNING][4888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.927 [INFO][4888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.929 [INFO][4888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:15.934440 env[1325]: 2025-09-13 00:58:15.932 [INFO][4881] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:15.935829 env[1325]: time="2025-09-13T00:58:15.934443193Z" level=info msg="TearDown network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\" successfully" Sep 13 00:58:15.935829 env[1325]: time="2025-09-13T00:58:15.934487489Z" level=info msg="StopPodSandbox for \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\" returns successfully" Sep 13 00:58:15.935829 env[1325]: time="2025-09-13T00:58:15.935268862Z" level=info msg="RemovePodSandbox for \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\"" Sep 13 00:58:15.935829 env[1325]: time="2025-09-13T00:58:15.935313563Z" level=info msg="Forcibly stopping sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\"" Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.029 [WARNING][4904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02a91fdd-1f7d-4977-ad95-07ea1dc01154", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"fbc3974fcea112d08ffb3897b086f448b0f3a8f806eea5712de46260d2b6af92", Pod:"coredns-7c65d6cfc9-x5h4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2188de7e510", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.030 [INFO][4904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.030 [INFO][4904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" iface="eth0" netns="" Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.030 [INFO][4904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.031 [INFO][4904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.090 [INFO][4911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.091 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.091 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.112 [WARNING][4911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.113 [INFO][4911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" HandleID="k8s-pod-network.9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-coredns--7c65d6cfc9--x5h4h-eth0" Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.116 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:16.120096 env[1325]: 2025-09-13 00:58:16.118 [INFO][4904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12" Sep 13 00:58:16.121056 env[1325]: time="2025-09-13T00:58:16.120154037Z" level=info msg="TearDown network for sandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\" successfully" Sep 13 00:58:16.125675 env[1325]: time="2025-09-13T00:58:16.125585533Z" level=info msg="RemovePodSandbox \"9ba56898479cf37a7538f8461f1d97c610d22260780e529e8664a129d3c7aa12\" returns successfully" Sep 13 00:58:16.126585 env[1325]: time="2025-09-13T00:58:16.126549703Z" level=info msg="StopPodSandbox for \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\"" Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.222 [WARNING][4925] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7a8637c4-413d-4e61-bef0-740ff2360374", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf", Pod:"goldmane-7988f88666-mqrbm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid381f404b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.222 [INFO][4925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.222 [INFO][4925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" iface="eth0" netns="" Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.222 [INFO][4925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.222 [INFO][4925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.275 [INFO][4932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.276 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.276 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.286 [WARNING][4932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.286 [INFO][4932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.288 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:16.295213 env[1325]: 2025-09-13 00:58:16.290 [INFO][4925] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:16.296254 env[1325]: time="2025-09-13T00:58:16.296176225Z" level=info msg="TearDown network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\" successfully" Sep 13 00:58:16.296417 env[1325]: time="2025-09-13T00:58:16.296375632Z" level=info msg="StopPodSandbox for \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\" returns successfully" Sep 13 00:58:16.297288 env[1325]: time="2025-09-13T00:58:16.297251992Z" level=info msg="RemovePodSandbox for \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\"" Sep 13 00:58:16.297505 env[1325]: time="2025-09-13T00:58:16.297445970Z" level=info msg="Forcibly stopping sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\"" Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.417 [WARNING][4948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7a8637c4-413d-4e61-bef0-740ff2360374", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf", Pod:"goldmane-7988f88666-mqrbm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid381f404b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.417 [INFO][4948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.417 [INFO][4948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" iface="eth0" netns="" Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.417 [INFO][4948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.417 [INFO][4948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.465 [INFO][4955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.466 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.466 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.486 [WARNING][4955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.486 [INFO][4955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" HandleID="k8s-pod-network.7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-goldmane--7988f88666--mqrbm-eth0" Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.488 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:16.492408 env[1325]: 2025-09-13 00:58:16.490 [INFO][4948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f" Sep 13 00:58:16.492408 env[1325]: time="2025-09-13T00:58:16.492372154Z" level=info msg="TearDown network for sandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\" successfully" Sep 13 00:58:16.497644 env[1325]: time="2025-09-13T00:58:16.497567798Z" level=info msg="RemovePodSandbox \"7e73aa57620a32848846028c62564d71d019d9f228898ae7a7f2f4fa8b78ee0f\" returns successfully" Sep 13 00:58:16.498336 env[1325]: time="2025-09-13T00:58:16.498290956Z" level=info msg="StopPodSandbox for \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\"" Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.575 [WARNING][4971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0", GenerateName:"calico-kube-controllers-b845c7695-", Namespace:"calico-system", SelfLink:"", UID:"eca60e6b-177b-4588-8e7f-a2dc081264e1", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b845c7695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c", Pod:"calico-kube-controllers-b845c7695-sr7sp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia79c7d24088", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.576 [INFO][4971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.576 [INFO][4971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" iface="eth0" netns="" Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.576 [INFO][4971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.576 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.626 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.626 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.626 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.638 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.638 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.640 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:16.645531 env[1325]: 2025-09-13 00:58:16.643 [INFO][4971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:16.646550 env[1325]: time="2025-09-13T00:58:16.646498628Z" level=info msg="TearDown network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\" successfully" Sep 13 00:58:16.646705 env[1325]: time="2025-09-13T00:58:16.646676256Z" level=info msg="StopPodSandbox for \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\" returns successfully" Sep 13 00:58:16.647406 env[1325]: time="2025-09-13T00:58:16.647370684Z" level=info msg="RemovePodSandbox for \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\"" Sep 13 00:58:16.647640 env[1325]: time="2025-09-13T00:58:16.647562081Z" level=info msg="Forcibly stopping sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\"" Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.731 [WARNING][4994] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0", GenerateName:"calico-kube-controllers-b845c7695-", Namespace:"calico-system", SelfLink:"", UID:"eca60e6b-177b-4588-8e7f-a2dc081264e1", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 57, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b845c7695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20250912-2100-1db970da00b806c00ba4", ContainerID:"6d3dfbd32f5531c1d9e9fd63d9c14ab28c831a06a0ece57d7fa153798a18430c", Pod:"calico-kube-controllers-b845c7695-sr7sp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia79c7d24088", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.732 [INFO][4994] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.732 [INFO][4994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" iface="eth0" netns="" Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.732 [INFO][4994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.732 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.784 [INFO][5001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.784 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.785 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.795 [WARNING][5001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.795 [INFO][5001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" HandleID="k8s-pod-network.4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Workload="ci--3510--3--8--nightly--20250912--2100--1db970da00b806c00ba4-k8s-calico--kube--controllers--b845c7695--sr7sp-eth0" Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.798 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:58:16.802028 env[1325]: 2025-09-13 00:58:16.800 [INFO][4994] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30" Sep 13 00:58:16.802912 env[1325]: time="2025-09-13T00:58:16.802077119Z" level=info msg="TearDown network for sandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\" successfully" Sep 13 00:58:16.807515 env[1325]: time="2025-09-13T00:58:16.807393326Z" level=info msg="RemovePodSandbox \"4b9322ba230f6ad11fa486e126845058a8550e8c97a49d62fb4503eca50e5d30\" returns successfully" Sep 13 00:58:17.002599 env[1325]: time="2025-09-13T00:58:17.001018608Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:17.005454 env[1325]: time="2025-09-13T00:58:17.005401751Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:17.007860 env[1325]: time="2025-09-13T00:58:17.007819489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:17.010147 env[1325]: time="2025-09-13T00:58:17.010108686Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:17.011477 env[1325]: time="2025-09-13T00:58:17.011431586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:58:17.015322 env[1325]: time="2025-09-13T00:58:17.014841593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:58:17.016357 env[1325]: time="2025-09-13T00:58:17.016296993Z" level=info msg="CreateContainer within sandbox \"7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:58:17.036891 env[1325]: time="2025-09-13T00:58:17.036817391Z" level=info msg="CreateContainer within sandbox \"7875454c8aca6393186b1f880b250ef5bd18eaf78092c49f431f2b1f2b0b01bf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2fa0d2e2a6e2bba89a1c1e9087ae1102cbdaa19a0e3105e8ae96b37693c152e5\"" Sep 13 00:58:17.039639 env[1325]: 
time="2025-09-13T00:58:17.037545848Z" level=info msg="StartContainer for \"2fa0d2e2a6e2bba89a1c1e9087ae1102cbdaa19a0e3105e8ae96b37693c152e5\"" Sep 13 00:58:17.152781 env[1325]: time="2025-09-13T00:58:17.152689803Z" level=info msg="StartContainer for \"2fa0d2e2a6e2bba89a1c1e9087ae1102cbdaa19a0e3105e8ae96b37693c152e5\" returns successfully" Sep 13 00:58:17.416896 kubelet[2221]: I0913 00:58:17.416192 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-mqrbm" podStartSLOduration=28.51330246 podStartE2EDuration="40.416162388s" podCreationTimestamp="2025-09-13 00:57:37 +0000 UTC" firstStartedPulling="2025-09-13 00:58:05.110161075 +0000 UTC m=+51.696655081" lastFinishedPulling="2025-09-13 00:58:17.013020998 +0000 UTC m=+63.599515009" observedRunningTime="2025-09-13 00:58:17.415778942 +0000 UTC m=+64.002272962" watchObservedRunningTime="2025-09-13 00:58:17.416162388 +0000 UTC m=+64.002656407" Sep 13 00:58:17.430000 audit[5043]: NETFILTER_CFG table=filter:127 family=2 entries=10 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:17.430000 audit[5043]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffde48e1120 a2=0 a3=7ffde48e110c items=0 ppid=2341 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:17.430000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:17.435000 audit[5043]: NETFILTER_CFG table=nat:128 family=2 entries=24 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:17.435000 audit[5043]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffde48e1120 a2=0 a3=7ffde48e110c items=0 ppid=2341 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:17.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:18.153643 systemd[1]: run-containerd-runc-k8s.io-2fa0d2e2a6e2bba89a1c1e9087ae1102cbdaa19a0e3105e8ae96b37693c152e5-runc.0BokcV.mount: Deactivated successfully. 
Sep 13 00:58:18.569354 env[1325]: time="2025-09-13T00:58:18.569198636Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:18.572818 env[1325]: time="2025-09-13T00:58:18.572767019Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:18.575868 env[1325]: time="2025-09-13T00:58:18.575822665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:18.579123 env[1325]: time="2025-09-13T00:58:18.579074041Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:18.580130 env[1325]: time="2025-09-13T00:58:18.580078839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:58:18.584480 env[1325]: time="2025-09-13T00:58:18.584421733Z" level=info msg="CreateContainer within sandbox \"c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:58:18.616839 env[1325]: time="2025-09-13T00:58:18.616773467Z" level=info msg="CreateContainer within sandbox \"c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a9541a5ae5bb67e3da860455c4f47e5d93da863a5abcb15318c21cdb24cad2c2\"" Sep 13 00:58:18.618054 env[1325]: time="2025-09-13T00:58:18.618006741Z" level=info msg="StartContainer for \"a9541a5ae5bb67e3da860455c4f47e5d93da863a5abcb15318c21cdb24cad2c2\"" Sep 13 00:58:18.859951 env[1325]: time="2025-09-13T00:58:18.859811168Z" level=info msg="StartContainer for \"a9541a5ae5bb67e3da860455c4f47e5d93da863a5abcb15318c21cdb24cad2c2\" returns successfully" Sep 13 00:58:18.862138 env[1325]: time="2025-09-13T00:58:18.862091875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:58:20.022137 systemd[1]: run-containerd-runc-k8s.io-9252380b4d39f4e3c6af2a1f27be3344e127336b6bb7930869c9a33eb5c8dc72-runc.sGPOxz.mount: Deactivated successfully. Sep 13 00:58:20.509029 systemd[1]: run-containerd-runc-k8s.io-9252380b4d39f4e3c6af2a1f27be3344e127336b6bb7930869c9a33eb5c8dc72-runc.YOQYK7.mount: Deactivated successfully. 
Sep 13 00:58:20.713555 env[1325]: time="2025-09-13T00:58:20.713484444Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:20.716077 env[1325]: time="2025-09-13T00:58:20.716030597Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:20.718230 env[1325]: time="2025-09-13T00:58:20.718191258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:20.720217 env[1325]: time="2025-09-13T00:58:20.720180528Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:58:20.721017 env[1325]: time="2025-09-13T00:58:20.720956620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:58:20.727153 env[1325]: time="2025-09-13T00:58:20.727105521Z" level=info msg="CreateContainer within sandbox \"c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:58:20.753697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779794987.mount: Deactivated successfully. Sep 13 00:58:20.757770 env[1325]: time="2025-09-13T00:58:20.757706695Z" level=info msg="CreateContainer within sandbox \"c1217562f5a0f914aa8f5362d51365f11f36150ec41ec2e0c2075d7c1db882f0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3a16d01d261015493e04c90234bc598d69b80cf7d16372619bed8c053a473810\"" Sep 13 00:58:20.760080 env[1325]: time="2025-09-13T00:58:20.758790665Z" level=info msg="StartContainer for \"3a16d01d261015493e04c90234bc598d69b80cf7d16372619bed8c053a473810\"" Sep 13 00:58:20.841848 env[1325]: time="2025-09-13T00:58:20.841785740Z" level=info msg="StartContainer for \"3a16d01d261015493e04c90234bc598d69b80cf7d16372619bed8c053a473810\" returns successfully" Sep 13 00:58:21.848204 kubelet[2221]: I0913 00:58:21.848140 2221 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:58:21.848204 kubelet[2221]: I0913 00:58:21.848191 2221 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:58:25.796399 systemd[1]: Started sshd@7-10.128.0.69:22-139.178.68.195:53116.service. Sep 13 00:58:25.827300 kernel: kauditd_printk_skb: 14 callbacks suppressed Sep 13 00:58:25.827477 kernel: audit: type=1130 audit(1757725105.795:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.69:22-139.178.68.195:53116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:58:25.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.69:22-139.178.68.195:53116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:26.232000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.263889 kernel: audit: type=1101 audit(1757725106.232:416): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.263950 sshd[5247]: Accepted publickey for core from 139.178.68.195 port 53116 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:58:26.262000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.264815 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:58:26.289250 systemd[1]: Started session-8.scope. Sep 13 00:58:26.290942 kernel: audit: type=1103 audit(1757725106.262:417): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.290253 systemd-logind[1310]: New session 8 of user core. 
Sep 13 00:58:26.308752 kernel: audit: type=1006 audit(1757725106.262:418): pid=5247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Sep 13 00:58:26.262000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd77eb9fa0 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:26.262000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:26.354046 kernel: audit: type=1300 audit(1757725106.262:418): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd77eb9fa0 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:26.354234 kernel: audit: type=1327 audit(1757725106.262:418): proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:26.308000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.386756 kernel: audit: type=1105 audit(1757725106.308:419): pid=5247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.395589 kernel: audit: type=1103 audit(1757725106.313:420): pid=5250 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.313000 audit[5250]: CRED_ACQ pid=5250 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.724670 sshd[5247]: pam_unix(sshd:session): session closed for user core Sep 13 00:58:26.724000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.755222 systemd-logind[1310]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:58:26.757414 systemd[1]: sshd@7-10.128.0.69:22-139.178.68.195:53116.service: Deactivated successfully. Sep 13 00:58:26.758765 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:58:26.760160 kernel: audit: type=1106 audit(1757725106.724:421): pid=5247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.760928 systemd-logind[1310]: Removed session 8. 
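[Editor's note] The audit PROCTITLE fields in these records carry the process title hex-encoded, with NUL bytes separating arguments. Decoding the two titles that recur in this log (the sshd one above and the iptables-restore one from the NETFILTER_CFG entries at 00:58:17) shows what actually ran:

    def proctitle(hexstr: str) -> str:
        # PROCTITLE is hex-encoded; NUL bytes separate the arguments.
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

    print(proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]
    print(proctitle("69707461626C65732D726573746F7265002D770035002D5700"
                    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters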
Sep 13 00:58:26.750000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:26.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.69:22-139.178.68.195:53116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:26.793630 kernel: audit: type=1104 audit(1757725106.750:422): pid=5247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:31.786235 systemd[1]: Started sshd@8-10.128.0.69:22-139.178.68.195:41014.service. Sep 13 00:58:31.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.69:22-139.178.68.195:41014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:31.794592 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:58:31.794727 kernel: audit: type=1130 audit(1757725111.788:424): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.69:22-139.178.68.195:41014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:32.215000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.217930 sshd[5280]: Accepted publickey for core from 139.178.68.195 port 41014 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:58:32.246659 kernel: audit: type=1101 audit(1757725112.215:425): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.247259 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:58:32.245000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.256677 systemd-logind[1310]: New session 9 of user core. Sep 13 00:58:32.260057 systemd[1]: Started session-9.scope. 
Sep 13 00:58:32.283051 kernel: audit: type=1103 audit(1757725112.245:426): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.301387 kernel: audit: type=1006 audit(1757725112.245:427): pid=5280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Sep 13 00:58:32.245000 audit[5280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5b1b47b0 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:32.330655 kernel: audit: type=1300 audit(1757725112.245:427): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5b1b47b0 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:32.245000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:32.289000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.373292 kernel: audit: type=1327 audit(1757725112.245:427): proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:32.373473 kernel: audit: type=1105 audit(1757725112.289:428): pid=5280 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.301000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.421647 kernel: audit: type=1103 audit(1757725112.301:429): pid=5283 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.755970 sshd[5280]: pam_unix(sshd:session): session closed for user core Sep 13 00:58:32.757000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.770991 systemd[1]: sshd@8-10.128.0.69:22-139.178.68.195:41014.service: Deactivated successfully. Sep 13 00:58:32.772412 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:58:32.776033 systemd-logind[1310]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:58:32.777724 systemd-logind[1310]: Removed session 9. 
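[Editor's note] Sessions 8 and 9 above follow the same shape: systemd starts a per-connection sshd@N-<local>:22-<peer>:<port>.service unit, PAM logs USER_ACCT/CRED_ACQ/USER_START on open and USER_END/CRED_DISP on close, and systemd-logind assigns the session number. A sketch that pairs opens and closes by the audit ses= field to get per-session durations; it assumes a one-record-per-line journal export under a hypothetical file name:

    import re

    stamp = re.compile(r"audit\((\d+\.\d+):\d+\)")   # e.g. audit(1757725106.308:419)
    ses   = re.compile(r"\bses=(\d+)\b")

    opened, durations = {}, {}
    with open("node-journal.txt") as journal:        # hypothetical export of this log
        for line in journal:
            t, s = stamp.search(line), ses.search(line)
            if not (t and s):
                continue
            when, session = float(t.group(1)), int(s.group(1))
            if "USER_START" in line:
                opened[session] = when
            elif "USER_END" in line and session in opened:
                durations[session] = when - opened.pop(session)

    for session, seconds in sorted(durations.items()):
        print(f"session {session}: {seconds:.1f}s")  # session 8 comes out around 0.4s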
Sep 13 00:58:32.807321 kernel: audit: type=1106 audit(1757725112.757:430): pid=5280 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.807510 kernel: audit: type=1104 audit(1757725112.757:431): pid=5280 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.757000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:32.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.69:22-139.178.68.195:41014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:37.817677 systemd[1]: Started sshd@9-10.128.0.69:22-139.178.68.195:41026.service. Sep 13 00:58:37.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.69:22-139.178.68.195:41026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:37.826642 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:58:37.826802 kernel: audit: type=1130 audit(1757725117.818:433): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.69:22-139.178.68.195:41026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:38.239000 audit[5293]: USER_ACCT pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.241534 sshd[5293]: Accepted publickey for core from 139.178.68.195 port 41026 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:58:38.270666 kernel: audit: type=1101 audit(1757725118.239:434): pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.271061 sshd[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:58:38.269000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.290746 systemd-logind[1310]: New session 10 of user core. Sep 13 00:58:38.292056 systemd[1]: Started session-10.scope. 
Sep 13 00:58:38.301723 kernel: audit: type=1103 audit(1757725118.269:435): pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.269000 audit[5293]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff32610a0 a2=3 a3=0 items=0 ppid=1 pid=5293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:38.362435 kernel: audit: type=1006 audit(1757725118.269:436): pid=5293 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 13 00:58:38.362641 kernel: audit: type=1300 audit(1757725118.269:436): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff32610a0 a2=3 a3=0 items=0 ppid=1 pid=5293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:38.362714 kernel: audit: type=1327 audit(1757725118.269:436): proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:38.269000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:38.302000 audit[5293]: USER_START pid=5293 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.404751 kernel: audit: type=1105 audit(1757725118.302:437): pid=5293 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.305000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.429263 kernel: audit: type=1103 audit(1757725118.305:438): pid=5296 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.631562 sshd[5293]: pam_unix(sshd:session): session closed for user core Sep 13 00:58:38.633000 audit[5293]: USER_END pid=5293 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.666694 kernel: audit: type=1106 audit(1757725118.633:439): pid=5293 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.633000 audit[5293]: CRED_DISP pid=5293 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.669487 systemd[1]: sshd@9-10.128.0.69:22-139.178.68.195:41026.service: Deactivated successfully. Sep 13 00:58:38.670896 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:58:38.696056 kernel: audit: type=1104 audit(1757725118.633:440): pid=5293 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:38.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.69:22-139.178.68.195:41026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:38.692195 systemd-logind[1310]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:58:38.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.69:22-139.178.68.195:41040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:38.699341 systemd[1]: Started sshd@10-10.128.0.69:22-139.178.68.195:41040.service. Sep 13 00:58:38.705381 systemd-logind[1310]: Removed session 10. Sep 13 00:58:39.082000 audit[5307]: USER_ACCT pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:39.084406 sshd[5307]: Accepted publickey for core from 139.178.68.195 port 41040 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:58:39.085338 sshd[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:58:39.084000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:39.084000 audit[5307]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff59c16020 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:39.084000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:39.092178 systemd-logind[1310]: New session 11 of user core. Sep 13 00:58:39.092958 systemd[1]: Started session-11.scope. 
Sep 13 00:58:39.101000 audit[5307]: USER_START pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:39.103000 audit[5310]: CRED_ACQ pid=5310 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:39.619752 sshd[5307]: pam_unix(sshd:session): session closed for user core Sep 13 00:58:39.620000 audit[5307]: USER_END pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:39.621000 audit[5307]: CRED_DISP pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:39.625253 systemd-logind[1310]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:58:39.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.69:22-139.178.68.195:41040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:39.627854 systemd[1]: sshd@10-10.128.0.69:22-139.178.68.195:41040.service: Deactivated successfully. Sep 13 00:58:39.629188 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:58:39.631578 systemd-logind[1310]: Removed session 11. Sep 13 00:58:39.680155 systemd[1]: Started sshd@11-10.128.0.69:22-139.178.68.195:41044.service. Sep 13 00:58:39.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.69:22-139.178.68.195:41044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:58:40.093000 audit[5323]: USER_ACCT pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:40.096092 sshd[5323]: Accepted publickey for core from 139.178.68.195 port 41044 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:58:40.096000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:40.097000 audit[5323]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc57116db0 a2=3 a3=0 items=0 ppid=1 pid=5323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:40.097000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:40.100238 sshd[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:58:40.108948 systemd[1]: Started session-12.scope. Sep 13 00:58:40.111691 systemd-logind[1310]: New session 12 of user core. Sep 13 00:58:40.129000 audit[5323]: USER_START pid=5323 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:40.132000 audit[5326]: CRED_ACQ pid=5326 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:40.542076 sshd[5323]: pam_unix(sshd:session): session closed for user core Sep 13 00:58:40.543000 audit[5323]: USER_END pid=5323 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:40.543000 audit[5323]: CRED_DISP pid=5323 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:40.547558 systemd-logind[1310]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:58:40.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.69:22-139.178.68.195:41044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:40.549255 systemd[1]: sshd@11-10.128.0.69:22-139.178.68.195:41044.service: Deactivated successfully. Sep 13 00:58:40.550664 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:58:40.553261 systemd-logind[1310]: Removed session 12. Sep 13 00:58:45.604582 systemd[1]: Started sshd@12-10.128.0.69:22-139.178.68.195:48256.service. 
Sep 13 00:58:45.636403 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:58:45.636587 kernel: audit: type=1130 audit(1757725125.603:460): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.69:22-139.178.68.195:48256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:45.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.69:22-139.178.68.195:48256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:46.038000 audit[5341]: USER_ACCT pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.041506 sshd[5341]: Accepted publickey for core from 139.178.68.195 port 48256 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:58:46.069694 kernel: audit: type=1101 audit(1757725126.038:461): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.071060 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:58:46.083205 systemd[1]: Started session-13.scope. Sep 13 00:58:46.085460 systemd-logind[1310]: New session 13 of user core. Sep 13 00:58:46.068000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.160305 kernel: audit: type=1103 audit(1757725126.068:462): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.160482 kernel: audit: type=1006 audit(1757725126.068:463): pid=5341 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 13 00:58:46.068000 audit[5341]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9e561fe0 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:46.189719 kernel: audit: type=1300 audit(1757725126.068:463): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9e561fe0 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:46.068000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:46.233005 kernel: audit: type=1327 audit(1757725126.068:463): proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:46.233201 kernel: audit: type=1105 audit(1757725126.094:464): pid=5341 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.094000 audit[5341]: USER_START pid=5341 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.098000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.258635 kernel: audit: type=1103 audit(1757725126.098:465): pid=5345 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.508944 sshd[5341]: pam_unix(sshd:session): session closed for user core Sep 13 00:58:46.509000 audit[5341]: USER_END pid=5341 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.514959 systemd-logind[1310]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:58:46.518376 systemd[1]: sshd@12-10.128.0.69:22-139.178.68.195:48256.service: Deactivated successfully. Sep 13 00:58:46.519837 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:58:46.524239 systemd-logind[1310]: Removed session 13. Sep 13 00:58:46.543652 kernel: audit: type=1106 audit(1757725126.509:466): pid=5341 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.509000 audit[5341]: CRED_DISP pid=5341 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:46.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.69:22-139.178.68.195:48256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:46.602634 kernel: audit: type=1104 audit(1757725126.509:467): pid=5341 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:48.168368 systemd[1]: run-containerd-runc-k8s.io-2fa0d2e2a6e2bba89a1c1e9087ae1102cbdaa19a0e3105e8ae96b37693c152e5-runc.p5Nbnq.mount: Deactivated successfully. 
Sep 13 00:58:48.422630 kubelet[2221]: I0913 00:58:48.422416 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bjjfb" podStartSLOduration=56.621279845 podStartE2EDuration="1m11.422358175s" podCreationTimestamp="2025-09-13 00:57:37 +0000 UTC" firstStartedPulling="2025-09-13 00:58:05.922012431 +0000 UTC m=+52.508506440" lastFinishedPulling="2025-09-13 00:58:20.723090777 +0000 UTC m=+67.309584770" observedRunningTime="2025-09-13 00:58:21.45951545 +0000 UTC m=+68.046009470" watchObservedRunningTime="2025-09-13 00:58:48.422358175 +0000 UTC m=+95.008852195" Sep 13 00:58:48.472000 audit[5397]: NETFILTER_CFG table=filter:129 family=2 entries=9 op=nft_register_rule pid=5397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:48.472000 audit[5397]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd65ec4300 a2=0 a3=7ffd65ec42ec items=0 ppid=2341 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:48.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:48.478000 audit[5397]: NETFILTER_CFG table=nat:130 family=2 entries=31 op=nft_register_chain pid=5397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:58:48.478000 audit[5397]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffd65ec4300 a2=0 a3=7ffd65ec42ec items=0 ppid=2341 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:48.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:58:50.020288 systemd[1]: run-containerd-runc-k8s.io-9252380b4d39f4e3c6af2a1f27be3344e127336b6bb7930869c9a33eb5c8dc72-runc.2PF0mw.mount: Deactivated successfully. Sep 13 00:58:51.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.69:22-139.178.68.195:44880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:51.571782 systemd[1]: Started sshd@13-10.128.0.69:22-139.178.68.195:44880.service. Sep 13 00:58:51.579634 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:58:51.579820 kernel: audit: type=1130 audit(1757725131.571:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.69:22-139.178.68.195:44880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:58:51.984000 audit[5422]: USER_ACCT pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.015218 sshd[5422]: Accepted publickey for core from 139.178.68.195 port 44880 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:58:52.015755 kernel: audit: type=1101 audit(1757725131.984:472): pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.016453 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:58:52.026266 systemd[1]: Started session-14.scope. Sep 13 00:58:52.027558 systemd-logind[1310]: New session 14 of user core. Sep 13 00:58:52.015000 audit[5422]: CRED_ACQ pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.073228 kernel: audit: type=1103 audit(1757725132.015:473): pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.073392 kernel: audit: type=1006 audit(1757725132.015:474): pid=5422 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 13 00:58:52.015000 audit[5422]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9833a3f0 a2=3 a3=0 items=0 ppid=1 pid=5422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:52.015000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:52.111215 kernel: audit: type=1300 audit(1757725132.015:474): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9833a3f0 a2=3 a3=0 items=0 ppid=1 pid=5422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:52.111414 kernel: audit: type=1327 audit(1757725132.015:474): proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:52.111476 kernel: audit: type=1105 audit(1757725132.039:475): pid=5422 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.039000 audit[5422]: USER_START pid=5422 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.043000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=500 ses=14 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.170661 kernel: audit: type=1103 audit(1757725132.043:476): pid=5424 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.437368 sshd[5422]: pam_unix(sshd:session): session closed for user core Sep 13 00:58:52.441000 audit[5422]: USER_END pid=5422 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.474847 kernel: audit: type=1106 audit(1757725132.441:477): pid=5422 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.478669 systemd-logind[1310]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:58:52.481385 systemd[1]: sshd@13-10.128.0.69:22-139.178.68.195:44880.service: Deactivated successfully. Sep 13 00:58:52.482740 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:58:52.487841 systemd-logind[1310]: Removed session 14. Sep 13 00:58:52.474000 audit[5422]: CRED_DISP pid=5422 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.546648 kernel: audit: type=1104 audit(1757725132.474:478): pid=5422 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:52.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.69:22-139.178.68.195:44880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:57.499020 systemd[1]: Started sshd@14-10.128.0.69:22-139.178.68.195:44896.service. Sep 13 00:58:57.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.69:22-139.178.68.195:44896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:58:57.504720 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:58:57.504886 kernel: audit: type=1130 audit(1757725137.498:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.69:22-139.178.68.195:44896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:58:57.944000 audit[5435]: USER_ACCT pid=5435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:57.966234 sshd[5435]: Accepted publickey for core from 139.178.68.195 port 44896 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:58:57.974000 audit[5435]: CRED_ACQ pid=5435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:57.976195 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:58:57.990480 systemd[1]: Started session-15.scope. Sep 13 00:58:57.996646 systemd-logind[1310]: New session 15 of user core. Sep 13 00:58:58.002602 kernel: audit: type=1101 audit(1757725137.944:481): pid=5435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.003453 kernel: audit: type=1103 audit(1757725137.974:482): pid=5435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.027775 kernel: audit: type=1006 audit(1757725137.974:483): pid=5435 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 13 00:58:57.974000 audit[5435]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2261fa30 a2=3 a3=0 items=0 ppid=1 pid=5435 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:58.057670 kernel: audit: type=1300 audit(1757725137.974:483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2261fa30 a2=3 a3=0 items=0 ppid=1 pid=5435 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:58:58.057865 kernel: audit: type=1327 audit(1757725137.974:483): proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:57.974000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:58:58.016000 audit[5435]: USER_START pid=5435 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.100240 kernel: audit: type=1105 audit(1757725138.016:484): pid=5435 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.021000 audit[5438]: CRED_ACQ pid=5438 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.126681 kernel: audit: type=1103 audit(1757725138.021:485): pid=5438 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.399666 sshd[5435]: pam_unix(sshd:session): session closed for user core Sep 13 00:58:58.401000 audit[5435]: USER_END pid=5435 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.434635 kernel: audit: type=1106 audit(1757725138.401:486): pid=5435 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.436801 systemd-logind[1310]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:58:58.437047 systemd[1]: sshd@14-10.128.0.69:22-139.178.68.195:44896.service: Deactivated successfully. Sep 13 00:58:58.438523 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:58:58.422000 audit[5435]: CRED_DISP pid=5435 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.440107 systemd-logind[1310]: Removed session 15. Sep 13 00:58:58.464639 kernel: audit: type=1104 audit(1757725138.422:487): pid=5435 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:58:58.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.69:22-139.178.68.195:44896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:00.412224 systemd[1]: run-containerd-runc-k8s.io-ab775a9ab22d814a697807faed069826d3a9d33ea762e31694a2064097f0b329-runc.DlKfYt.mount: Deactivated successfully. Sep 13 00:59:03.462768 systemd[1]: Started sshd@15-10.128.0.69:22-139.178.68.195:59350.service. Sep 13 00:59:03.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.69:22-139.178.68.195:59350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:03.468532 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:59:03.468684 kernel: audit: type=1130 audit(1757725143.461:489): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.69:22-139.178.68.195:59350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:59:03.886000 audit[5466]: USER_ACCT pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:03.918693 kernel: audit: type=1101 audit(1757725143.886:490): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:03.920231 sshd[5466]: Accepted publickey for core from 139.178.68.195 port 59350 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:03.922051 sshd[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:03.938656 systemd[1]: Started session-16.scope. Sep 13 00:59:03.940409 systemd-logind[1310]: New session 16 of user core. Sep 13 00:59:03.919000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:03.977822 kernel: audit: type=1103 audit(1757725143.919:491): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.025911 kernel: audit: type=1006 audit(1757725143.919:492): pid=5466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 13 00:59:03.919000 audit[5466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc5a2a430 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:04.072651 kernel: audit: type=1300 audit(1757725143.919:492): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc5a2a430 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:03.919000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:04.120654 kernel: audit: type=1327 audit(1757725143.919:492): proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:03.981000 audit[5466]: USER_START pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.165647 kernel: audit: type=1105 audit(1757725143.981:493): pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:03.986000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.202647 kernel: audit: type=1103 audit(1757725143.986:494): pid=5469 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.398939 sshd[5466]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:04.400000 audit[5466]: USER_END pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.407291 systemd-logind[1310]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:59:04.409868 systemd[1]: sshd@15-10.128.0.69:22-139.178.68.195:59350.service: Deactivated successfully. Sep 13 00:59:04.411181 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:59:04.413896 systemd-logind[1310]: Removed session 16. Sep 13 00:59:04.400000 audit[5466]: CRED_DISP pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.458917 kernel: audit: type=1106 audit(1757725144.400:495): pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.459081 kernel: audit: type=1104 audit(1757725144.400:496): pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.69:22-139.178.68.195:59350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:04.467283 systemd[1]: Started sshd@16-10.128.0.69:22-139.178.68.195:59364.service. Sep 13 00:59:04.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.69:22-139.178.68.195:59364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:59:04.870000 audit[5478]: USER_ACCT pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.873837 sshd[5478]: Accepted publickey for core from 139.178.68.195 port 59364 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:04.873000 audit[5478]: CRED_ACQ pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.873000 audit[5478]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdebf54de0 a2=3 a3=0 items=0 ppid=1 pid=5478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:04.873000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:04.876057 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:04.884794 systemd[1]: Started session-17.scope. Sep 13 00:59:04.885678 systemd-logind[1310]: New session 17 of user core. Sep 13 00:59:04.894000 audit[5478]: USER_START pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:04.897000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:05.446923 sshd[5478]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:05.446000 audit[5478]: USER_END pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:05.446000 audit[5478]: CRED_DISP pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:05.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.69:22-139.178.68.195:59364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:05.452494 systemd[1]: sshd@16-10.128.0.69:22-139.178.68.195:59364.service: Deactivated successfully. Sep 13 00:59:05.454028 systemd-logind[1310]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:59:05.455583 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:59:05.456699 systemd-logind[1310]: Removed session 17. Sep 13 00:59:05.506245 systemd[1]: Started sshd@17-10.128.0.69:22-139.178.68.195:59370.service. 
Sep 13 00:59:05.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.69:22-139.178.68.195:59370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:05.913000 audit[5490]: USER_ACCT pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:05.917307 sshd[5490]: Accepted publickey for core from 139.178.68.195 port 59370 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:05.916000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:05.917000 audit[5490]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd80a8bd20 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:05.917000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:05.921598 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:05.933153 systemd[1]: Started session-18.scope. Sep 13 00:59:05.935183 systemd-logind[1310]: New session 18 of user core. Sep 13 00:59:05.948000 audit[5490]: USER_START pid=5490 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:05.951000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:09.636933 kernel: kauditd_printk_skb: 20 callbacks suppressed Sep 13 00:59:09.637192 kernel: audit: type=1325 audit(1757725149.614:513): table=filter:131 family=2 entries=20 op=nft_register_rule pid=5503 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:09.614000 audit[5503]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5503 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:09.614000 audit[5503]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffccc6bbfa0 a2=0 a3=7ffccc6bbf8c items=0 ppid=2341 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:09.678652 kernel: audit: type=1300 audit(1757725149.614:513): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffccc6bbfa0 a2=0 a3=7ffccc6bbf8c items=0 ppid=2341 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:09.614000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:09.695700 kernel: audit: type=1327 audit(1757725149.614:513): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:09.681176 sshd[5490]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:09.643000 audit[5503]: NETFILTER_CFG table=nat:132 family=2 entries=26 op=nft_register_rule pid=5503 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:09.700951 systemd[1]: sshd@17-10.128.0.69:22-139.178.68.195:59370.service: Deactivated successfully. Sep 13 00:59:09.702295 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:59:09.703563 systemd-logind[1310]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:59:09.713946 kernel: audit: type=1325 audit(1757725149.643:514): table=nat:132 family=2 entries=26 op=nft_register_rule pid=5503 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:09.706890 systemd-logind[1310]: Removed session 18. Sep 13 00:59:09.718425 systemd[1]: Started sshd@18-10.128.0.69:22-139.178.68.195:59376.service. Sep 13 00:59:09.643000 audit[5503]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffccc6bbfa0 a2=0 a3=0 items=0 ppid=2341 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:09.761697 kernel: audit: type=1300 audit(1757725149.643:514): arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffccc6bbfa0 a2=0 a3=0 items=0 ppid=2341 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:09.643000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:09.778640 kernel: audit: type=1327 audit(1757725149.643:514): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:09.696000 audit[5490]: USER_END pid=5490 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:09.811650 kernel: audit: type=1106 audit(1757725149.696:515): pid=5490 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:09.696000 audit[5490]: CRED_DISP pid=5490 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:09.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.69:22-139.178.68.195:59370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:59:09.868113 kernel: audit: type=1104 audit(1757725149.696:516): pid=5490 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:09.868311 kernel: audit: type=1131 audit(1757725149.700:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.69:22-139.178.68.195:59370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:09.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.69:22-139.178.68.195:59376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:09.893641 kernel: audit: type=1130 audit(1757725149.718:518): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.69:22-139.178.68.195:59376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:09.817000 audit[5509]: NETFILTER_CFG table=filter:133 family=2 entries=32 op=nft_register_rule pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:09.817000 audit[5509]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffdb4213ea0 a2=0 a3=7ffdb4213e8c items=0 ppid=2341 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:09.817000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:09.890000 audit[5509]: NETFILTER_CFG table=nat:134 family=2 entries=26 op=nft_register_rule pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:09.890000 audit[5509]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffdb4213ea0 a2=0 a3=0 items=0 ppid=2341 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:09.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:10.150000 audit[5506]: USER_ACCT pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:10.153144 sshd[5506]: Accepted publickey for core from 139.178.68.195 port 59376 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:10.153000 audit[5506]: CRED_ACQ pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:10.153000 audit[5506]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe24dc8460 a2=3 a3=0 items=0 ppid=1 pid=5506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:10.153000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:10.154379 sshd[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:10.165042 systemd-logind[1310]: New session 19 of user core. Sep 13 00:59:10.166505 systemd[1]: Started session-19.scope. Sep 13 00:59:10.176000 audit[5506]: USER_START pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:10.179000 audit[5511]: CRED_ACQ pid=5511 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:10.866931 sshd[5506]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:10.868000 audit[5506]: USER_END pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:10.869000 audit[5506]: CRED_DISP pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:10.872950 systemd-logind[1310]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:59:10.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.69:22-139.178.68.195:59376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:10.876010 systemd[1]: sshd@18-10.128.0.69:22-139.178.68.195:59376.service: Deactivated successfully. Sep 13 00:59:10.877319 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:59:10.882083 systemd-logind[1310]: Removed session 19. Sep 13 00:59:10.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.69:22-139.178.68.195:36496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:10.925392 systemd[1]: Started sshd@19-10.128.0.69:22-139.178.68.195:36496.service. 
Sep 13 00:59:11.328000 audit[5519]: USER_ACCT pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:11.331988 sshd[5519]: Accepted publickey for core from 139.178.68.195 port 36496 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:11.332000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:11.332000 audit[5519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4fdc7070 a2=3 a3=0 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:11.332000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:11.334575 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:11.344444 systemd[1]: Started session-20.scope. Sep 13 00:59:11.345037 systemd-logind[1310]: New session 20 of user core. Sep 13 00:59:11.357000 audit[5519]: USER_START pid=5519 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:11.360000 audit[5522]: CRED_ACQ pid=5522 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:11.740909 sshd[5519]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:11.742000 audit[5519]: USER_END pid=5519 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:11.742000 audit[5519]: CRED_DISP pid=5519 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:11.746987 systemd-logind[1310]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:59:11.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.69:22-139.178.68.195:36496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:11.749565 systemd[1]: sshd@19-10.128.0.69:22-139.178.68.195:36496.service: Deactivated successfully. Sep 13 00:59:11.750968 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:59:11.754202 systemd-logind[1310]: Removed session 20. 
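Note (illustrative sketch, not part of the captured log): the proctitle= values in the PROCTITLE audit records above are hex-encoded command lines, with NUL bytes separating argv elements. A minimal Python decoder for the two values that occur in this excerpt (the helper name decode_proctitle is ours):

    # Decode the hex-encoded audit proctitle values quoted above.
    # argv elements are NUL-separated in the raw value.
    def decode_proctitle(hexstr: str) -> str:
        raw = bytes.fromhex(hexstr)
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The second value is the command line behind the NETFILTER_CFG records in this excerpt: a non-flushing iptables-restore reload with counters preserved.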
Sep 13 00:59:16.781264 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 13 00:59:16.781463 kernel: audit: type=1325 audit(1757725156.758:538): table=filter:135 family=2 entries=20 op=nft_register_rule pid=5535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:16.758000 audit[5535]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:16.758000 audit[5535]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffca87bd050 a2=0 a3=7ffca87bd03c items=0 ppid=2341 pid=5535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:16.820843 kernel: audit: type=1300 audit(1757725156.758:538): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffca87bd050 a2=0 a3=7ffca87bd03c items=0 ppid=2341 pid=5535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:16.825633 systemd[1]: Started sshd@20-10.128.0.69:22-139.178.68.195:36512.service. Sep 13 00:59:16.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:16.844648 kernel: audit: type=1327 audit(1757725156.758:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:16.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.69:22-139.178.68.195:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:16.870643 kernel: audit: type=1130 audit(1757725156.825:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.69:22-139.178.68.195:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:59:16.872000 audit[5535]: NETFILTER_CFG table=nat:136 family=2 entries=110 op=nft_register_chain pid=5535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:16.890636 kernel: audit: type=1325 audit(1757725156.872:540): table=nat:136 family=2 entries=110 op=nft_register_chain pid=5535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:59:16.872000 audit[5535]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffca87bd050 a2=0 a3=7ffca87bd03c items=0 ppid=2341 pid=5535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:16.924682 kernel: audit: type=1300 audit(1757725156.872:540): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffca87bd050 a2=0 a3=7ffca87bd03c items=0 ppid=2341 pid=5535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:16.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:16.941634 kernel: audit: type=1327 audit(1757725156.872:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:17.310000 audit[5536]: USER_ACCT pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:17.313176 sshd[5536]: Accepted publickey for core from 139.178.68.195 port 36512 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:17.341663 kernel: audit: type=1101 audit(1757725157.310:541): pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:17.342331 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:17.340000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:17.351269 systemd[1]: Started session-21.scope. Sep 13 00:59:17.352684 systemd-logind[1310]: New session 21 of user core. 
Sep 13 00:59:17.374640 kernel: audit: type=1103 audit(1757725157.340:542): pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:17.393654 kernel: audit: type=1006 audit(1757725157.340:543): pid=5536 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Sep 13 00:59:17.340000 audit[5536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3cd15260 a2=3 a3=0 items=0 ppid=1 pid=5536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:17.340000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:17.374000 audit[5536]: USER_START pid=5536 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:17.379000 audit[5540]: CRED_ACQ pid=5540 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:17.785911 sshd[5536]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:17.787000 audit[5536]: USER_END pid=5536 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:17.787000 audit[5536]: CRED_DISP pid=5536 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:17.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.69:22-139.178.68.195:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:17.789960 systemd[1]: sshd@20-10.128.0.69:22-139.178.68.195:36512.service: Deactivated successfully. Sep 13 00:59:17.791376 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:59:17.794414 systemd-logind[1310]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:59:17.799921 systemd-logind[1310]: Removed session 21. Sep 13 00:59:18.131703 systemd[1]: run-containerd-runc-k8s.io-2fa0d2e2a6e2bba89a1c1e9087ae1102cbdaa19a0e3105e8ae96b37693c152e5-runc.Ki2qhn.mount: Deactivated successfully. Sep 13 00:59:18.203431 systemd[1]: run-containerd-runc-k8s.io-ab775a9ab22d814a697807faed069826d3a9d33ea762e31694a2064097f0b329-runc.Cu1bUR.mount: Deactivated successfully. Sep 13 00:59:20.021414 systemd[1]: run-containerd-runc-k8s.io-9252380b4d39f4e3c6af2a1f27be3344e127336b6bb7930869c9a33eb5c8dc72-runc.xrYfuj.mount: Deactivated successfully. Sep 13 00:59:22.839317 systemd[1]: Started sshd@21-10.128.0.69:22-139.178.68.195:34836.service. 
Sep 13 00:59:22.870283 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:59:22.870458 kernel: audit: type=1130 audit(1757725162.838:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.69:22-139.178.68.195:34836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:22.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.69:22-139.178.68.195:34836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:23.258000 audit[5614]: USER_ACCT pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.290737 kernel: audit: type=1101 audit(1757725163.258:550): pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.291434 sshd[5614]: Accepted publickey for core from 139.178.68.195 port 34836 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:23.293168 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:23.290000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.319703 kernel: audit: type=1103 audit(1757725163.290:551): pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.325526 systemd-logind[1310]: New session 22 of user core. Sep 13 00:59:23.327034 systemd[1]: Started session-22.scope. 
Sep 13 00:59:23.336770 kernel: audit: type=1006 audit(1757725163.290:552): pid=5614 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Sep 13 00:59:23.374689 kernel: audit: type=1300 audit(1757725163.290:552): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe93038830 a2=3 a3=0 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:23.290000 audit[5614]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe93038830 a2=3 a3=0 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:23.290000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:23.345000 audit[5614]: USER_START pid=5614 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.420738 kernel: audit: type=1327 audit(1757725163.290:552): proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:23.420922 kernel: audit: type=1105 audit(1757725163.345:553): pid=5614 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.376000 audit[5617]: CRED_ACQ pid=5617 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.445445 kernel: audit: type=1103 audit(1757725163.376:554): pid=5617 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.752429 sshd[5614]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:23.752000 audit[5614]: USER_END pid=5614 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.791018 kernel: audit: type=1106 audit(1757725163.752:555): pid=5614 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.789484 systemd[1]: sshd@21-10.128.0.69:22-139.178.68.195:34836.service: Deactivated successfully. Sep 13 00:59:23.792182 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:59:23.793314 systemd-logind[1310]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:59:23.795499 systemd-logind[1310]: Removed session 22. 
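Note (illustrative sketch, not part of the captured log): every audit record carries an audit(<epoch>.<millis>:<serial>) stamp; records sharing a serial (for example the SYSCALL, PROCTITLE and auid-change records for one sshd accept) describe the same event, and the epoch matches the journal timestamp. A small parser, with names chosen here:

    # Split an audit(<epoch>.<millis>:<serial>) stamp into a UTC timestamp
    # and the event serial number.
    import datetime
    import re

    def parse_audit_stamp(record: str):
        m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
        if m is None:
            return None
        epoch, millis, serial = (int(g) for g in m.groups())
        ts = datetime.datetime.fromtimestamp(epoch, tz=datetime.timezone.utc)
        return ts.replace(microsecond=millis * 1000), serial

    print(parse_audit_stamp("type=1106 audit(1757725163.752:555): pid=5614"))
    # -> (datetime.datetime(2025, 9, 13, 0, 59, 23, 752000, tzinfo=datetime.timezone.utc), 555)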
Sep 13 00:59:23.752000 audit[5614]: CRED_DISP pid=5614 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:23.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.69:22-139.178.68.195:34836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:23.830645 kernel: audit: type=1104 audit(1757725163.752:556): pid=5614 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:28.820263 systemd[1]: Started sshd@22-10.128.0.69:22-139.178.68.195:34844.service. Sep 13 00:59:28.850725 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:59:28.850909 kernel: audit: type=1130 audit(1757725168.819:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.69:22-139.178.68.195:34844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:28.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.69:22-139.178.68.195:34844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:29.220000 audit[5631]: USER_ACCT pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.252394 sshd[5631]: Accepted publickey for core from 139.178.68.195 port 34844 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:29.252952 kernel: audit: type=1101 audit(1757725169.220:559): pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.254925 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:29.268580 systemd[1]: Started session-23.scope. Sep 13 00:59:29.269733 systemd-logind[1310]: New session 23 of user core. 
Sep 13 00:59:29.312803 kernel: audit: type=1103 audit(1757725169.252:560): pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.252000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.252000 audit[5631]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2828d3b0 a2=3 a3=0 items=0 ppid=1 pid=5631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:29.361897 kernel: audit: type=1006 audit(1757725169.252:561): pid=5631 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 13 00:59:29.362085 kernel: audit: type=1300 audit(1757725169.252:561): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2828d3b0 a2=3 a3=0 items=0 ppid=1 pid=5631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:29.252000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:29.410227 kernel: audit: type=1327 audit(1757725169.252:561): proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:29.410413 kernel: audit: type=1105 audit(1757725169.312:562): pid=5631 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.312000 audit[5631]: USER_START pid=5631 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.315000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.440733 kernel: audit: type=1103 audit(1757725169.315:563): pid=5634 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.497406 systemd[1]: run-containerd-runc-k8s.io-2fa0d2e2a6e2bba89a1c1e9087ae1102cbdaa19a0e3105e8ae96b37693c152e5-runc.Aw5gh9.mount: Deactivated successfully. 
Sep 13 00:59:29.704964 sshd[5631]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:29.705000 audit[5631]: USER_END pid=5631 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.734000 audit[5631]: CRED_DISP pid=5631 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.741779 systemd[1]: sshd@22-10.128.0.69:22-139.178.68.195:34844.service: Deactivated successfully. Sep 13 00:59:29.743131 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:59:29.764404 kernel: audit: type=1106 audit(1757725169.705:564): pid=5631 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.764590 kernel: audit: type=1104 audit(1757725169.734:565): pid=5631 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:29.767728 systemd-logind[1310]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:59:29.769734 systemd-logind[1310]: Removed session 23. Sep 13 00:59:29.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.69:22-139.178.68.195:34844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:34.759796 systemd[1]: Started sshd@23-10.128.0.69:22-139.178.68.195:51212.service. Sep 13 00:59:34.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.69:22-139.178.68.195:51212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:34.766217 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:59:34.766312 kernel: audit: type=1130 audit(1757725174.760:567): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.69:22-139.178.68.195:51212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:59:35.136000 audit[5677]: USER_ACCT pid=5677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.137386 sshd[5677]: Accepted publickey for core from 139.178.68.195 port 51212 ssh2: RSA SHA256:FcUh4BNE27e1kC0wUevabIQVoX+mPgnUAJiptYDOjtA Sep 13 00:59:35.166000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.168064 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:59:35.178309 systemd[1]: Started session-24.scope. Sep 13 00:59:35.179678 systemd-logind[1310]: New session 24 of user core. Sep 13 00:59:35.192780 kernel: audit: type=1101 audit(1757725175.136:568): pid=5677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.192937 kernel: audit: type=1103 audit(1757725175.166:569): pid=5677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.194528 kernel: audit: type=1006 audit(1757725175.166:570): pid=5677 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 13 00:59:35.166000 audit[5677]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd531afe00 a2=3 a3=0 items=0 ppid=1 pid=5677 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:35.210648 kernel: audit: type=1300 audit(1757725175.166:570): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd531afe00 a2=3 a3=0 items=0 ppid=1 pid=5677 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:59:35.166000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:35.238676 kernel: audit: type=1327 audit(1757725175.166:570): proctitle=737368643A20636F7265205B707269765D Sep 13 00:59:35.194000 audit[5677]: USER_START pid=5677 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.202000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.308451 kernel: audit: type=1105 audit(1757725175.194:571): pid=5677 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.308680 kernel: audit: type=1103 audit(1757725175.202:572): pid=5680 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.499282 sshd[5677]: pam_unix(sshd:session): session closed for user core Sep 13 00:59:35.501000 audit[5677]: USER_END pid=5677 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.534633 kernel: audit: type=1106 audit(1757725175.501:573): pid=5677 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.501000 audit[5677]: CRED_DISP pid=5677 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 13 00:59:35.536551 systemd[1]: sshd@23-10.128.0.69:22-139.178.68.195:51212.service: Deactivated successfully. Sep 13 00:59:35.538997 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:59:35.541670 systemd-logind[1310]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:59:35.543421 systemd-logind[1310]: Removed session 24. Sep 13 00:59:35.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.69:22-139.178.68.195:51212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:35.560766 kernel: audit: type=1104 audit(1757725175.501:574): pid=5677 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
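Note (illustrative, not part of the captured log): the numeric type= values in the kernel-side "audit:" echo lines pair with the named records that auditd logs under the same serial. A lookup table built from the pairings visible in this excerpt (incomplete by design; the 1006/LOGIN name is taken from the standard audit type list, since it is not printed here):

    # Audit record types observed in this excerpt, keyed by the numeric
    # type= value from the kernel echo lines.
    AUDIT_TYPES_SEEN = {
        1006: "LOGIN",          # auid/ses assignment when the session starts
        1101: "USER_ACCT",      # PAM accounting
        1103: "CRED_ACQ",       # PAM setcred (acquire)
        1104: "CRED_DISP",      # PAM setcred (dispose)
        1105: "USER_START",     # PAM session_open
        1106: "USER_END",       # PAM session_close
        1130: "SERVICE_START",  # systemd unit activated
        1131: "SERVICE_STOP",   # systemd unit deactivated
        1300: "SYSCALL",        # syscall record attached to the event
        1325: "NETFILTER_CFG",  # iptables/nft table registration
        1327: "PROCTITLE",      # hex-encoded command line
    }

    def audit_type_name(code: int) -> str:
        return AUDIT_TYPES_SEEN.get(code, f"unknown({code})")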