Jul 15 11:36:56.846346 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025 Jul 15 11:36:56.846363 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:36:56.846373 kernel: BIOS-provided physical RAM map: Jul 15 11:36:56.846379 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 11:36:56.846384 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 15 11:36:56.846389 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 15 11:36:56.846396 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 15 11:36:56.846402 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 15 11:36:56.846407 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 15 11:36:56.846414 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 15 11:36:56.846419 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 15 11:36:56.846424 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 15 11:36:56.846430 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 15 11:36:56.846436 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 15 11:36:56.846442 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 15 11:36:56.846450 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 15 11:36:56.846455 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 15 11:36:56.846461 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 15 11:36:56.846467 kernel: NX (Execute Disable) protection: active Jul 15 11:36:56.846473 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Jul 15 11:36:56.846479 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Jul 15 11:36:56.846485 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Jul 15 11:36:56.846490 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Jul 15 11:36:56.846496 kernel: extended physical RAM map: Jul 15 11:36:56.846502 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 11:36:56.846509 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 15 11:36:56.846514 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 15 11:36:56.846520 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 15 11:36:56.846526 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 15 11:36:56.846532 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 15 11:36:56.846538 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 15 11:36:56.846543 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Jul 15 11:36:56.846549 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Jul 15 11:36:56.846555 kernel: reserve setup_data: [mem 
0x000000009b474e58-0x000000009b475017] usable Jul 15 11:36:56.846561 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Jul 15 11:36:56.846566 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Jul 15 11:36:56.846573 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 15 11:36:56.846579 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 15 11:36:56.846585 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 15 11:36:56.846591 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 15 11:36:56.846599 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 15 11:36:56.846606 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 15 11:36:56.846612 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 15 11:36:56.846620 kernel: efi: EFI v2.70 by EDK II Jul 15 11:36:56.846626 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Jul 15 11:36:56.846632 kernel: random: crng init done Jul 15 11:36:56.846639 kernel: SMBIOS 2.8 present. Jul 15 11:36:56.846645 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jul 15 11:36:56.846651 kernel: Hypervisor detected: KVM Jul 15 11:36:56.846657 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 11:36:56.846664 kernel: kvm-clock: cpu 0, msr 2319b001, primary cpu clock Jul 15 11:36:56.846670 kernel: kvm-clock: using sched offset of 3948280633 cycles Jul 15 11:36:56.846678 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 11:36:56.846685 kernel: tsc: Detected 2794.750 MHz processor Jul 15 11:36:56.846691 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 11:36:56.846698 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 11:36:56.846704 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 15 11:36:56.846721 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 11:36:56.846728 kernel: Using GB pages for direct mapping Jul 15 11:36:56.846735 kernel: Secure boot disabled Jul 15 11:36:56.846741 kernel: ACPI: Early table checksum verification disabled Jul 15 11:36:56.846750 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 15 11:36:56.846756 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 15 11:36:56.846763 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:36:56.846769 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:36:56.846776 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 15 11:36:56.846782 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:36:56.846789 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:36:56.846795 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:36:56.846802 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:36:56.846809 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 15 11:36:56.846816 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 15 11:36:56.846822 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cb7a000-0x9cb7c1b9] Jul 15 11:36:56.846829 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 15 11:36:56.846835 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 15 11:36:56.846841 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 15 11:36:56.846848 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 15 11:36:56.846854 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 15 11:36:56.846860 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 15 11:36:56.846868 kernel: No NUMA configuration found Jul 15 11:36:56.846875 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 15 11:36:56.846881 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 15 11:36:56.846896 kernel: Zone ranges: Jul 15 11:36:56.846903 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 11:36:56.846909 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 15 11:36:56.846915 kernel: Normal empty Jul 15 11:36:56.846922 kernel: Movable zone start for each node Jul 15 11:36:56.846928 kernel: Early memory node ranges Jul 15 11:36:56.846936 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 15 11:36:56.846942 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 15 11:36:56.846948 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 15 11:36:56.846955 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 15 11:36:56.846961 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 15 11:36:56.846968 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 15 11:36:56.846974 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 15 11:36:56.846980 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 11:36:56.846987 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 15 11:36:56.846993 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 15 11:36:56.847001 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 11:36:56.847007 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 15 11:36:56.847014 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 15 11:36:56.847020 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 15 11:36:56.847026 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 15 11:36:56.847033 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 11:36:56.847039 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 15 11:36:56.847046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 15 11:36:56.847052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 11:36:56.847060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 11:36:56.847066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 11:36:56.847072 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 11:36:56.847079 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 11:36:56.847085 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 15 11:36:56.847092 kernel: TSC deadline timer available Jul 15 11:36:56.847098 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 15 11:36:56.847104 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 15 11:36:56.847111 kernel: kvm-guest: setup PV sched yield Jul 15 11:36:56.847118 kernel: [mem 
0xc0000000-0xffffffff] available for PCI devices Jul 15 11:36:56.847125 kernel: Booting paravirtualized kernel on KVM Jul 15 11:36:56.847136 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 11:36:56.847144 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Jul 15 11:36:56.847151 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Jul 15 11:36:56.847158 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Jul 15 11:36:56.847164 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 15 11:36:56.847171 kernel: kvm-guest: setup async PF for cpu 0 Jul 15 11:36:56.847178 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Jul 15 11:36:56.847184 kernel: kvm-guest: PV spinlocks enabled Jul 15 11:36:56.847191 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 11:36:56.847198 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jul 15 11:36:56.847206 kernel: Policy zone: DMA32 Jul 15 11:36:56.847214 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:36:56.847221 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 11:36:56.847228 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 11:36:56.847236 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 11:36:56.847243 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 11:36:56.847250 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 169308K reserved, 0K cma-reserved) Jul 15 11:36:56.847257 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 15 11:36:56.847264 kernel: ftrace: allocating 34607 entries in 136 pages Jul 15 11:36:56.847271 kernel: ftrace: allocated 136 pages with 2 groups Jul 15 11:36:56.847277 kernel: rcu: Hierarchical RCU implementation. Jul 15 11:36:56.847290 kernel: rcu: RCU event tracing is enabled. Jul 15 11:36:56.847297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 15 11:36:56.847304 kernel: Rude variant of Tasks RCU enabled. Jul 15 11:36:56.847311 kernel: Tracing variant of Tasks RCU enabled. Jul 15 11:36:56.847318 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 11:36:56.847325 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 15 11:36:56.847332 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 15 11:36:56.847338 kernel: Console: colour dummy device 80x25 Jul 15 11:36:56.847345 kernel: printk: console [ttyS0] enabled Jul 15 11:36:56.847352 kernel: ACPI: Core revision 20210730 Jul 15 11:36:56.847359 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 15 11:36:56.847367 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 11:36:56.847373 kernel: x2apic enabled Jul 15 11:36:56.847380 kernel: Switched APIC routing to physical x2apic. 
Jul 15 11:36:56.847387 kernel: kvm-guest: setup PV IPIs Jul 15 11:36:56.847394 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 15 11:36:56.847401 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 15 11:36:56.847407 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Jul 15 11:36:56.847414 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 15 11:36:56.847421 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 15 11:36:56.847429 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 15 11:36:56.847436 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 11:36:56.847442 kernel: Spectre V2 : Mitigation: Retpolines Jul 15 11:36:56.847449 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 11:36:56.847456 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 15 11:36:56.847463 kernel: RETBleed: Mitigation: untrained return thunk Jul 15 11:36:56.847470 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 15 11:36:56.847477 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 15 11:36:56.847483 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 11:36:56.847491 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 11:36:56.847498 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 11:36:56.847505 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 11:36:56.847512 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 15 11:36:56.847518 kernel: Freeing SMP alternatives memory: 32K Jul 15 11:36:56.847525 kernel: pid_max: default: 32768 minimum: 301 Jul 15 11:36:56.847532 kernel: LSM: Security Framework initializing Jul 15 11:36:56.847538 kernel: SELinux: Initializing. Jul 15 11:36:56.847545 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 11:36:56.847553 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 11:36:56.847560 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 15 11:36:56.847567 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 15 11:36:56.847574 kernel: ... version: 0 Jul 15 11:36:56.847580 kernel: ... bit width: 48 Jul 15 11:36:56.847587 kernel: ... generic registers: 6 Jul 15 11:36:56.847594 kernel: ... value mask: 0000ffffffffffff Jul 15 11:36:56.847600 kernel: ... max period: 00007fffffffffff Jul 15 11:36:56.847607 kernel: ... fixed-purpose events: 0 Jul 15 11:36:56.847615 kernel: ... event mask: 000000000000003f Jul 15 11:36:56.847621 kernel: signal: max sigframe size: 1776 Jul 15 11:36:56.847628 kernel: rcu: Hierarchical SRCU implementation. Jul 15 11:36:56.847635 kernel: smp: Bringing up secondary CPUs ... Jul 15 11:36:56.847641 kernel: x86: Booting SMP configuration: Jul 15 11:36:56.847648 kernel: .... 
node #0, CPUs: #1 Jul 15 11:36:56.847654 kernel: kvm-clock: cpu 1, msr 2319b041, secondary cpu clock Jul 15 11:36:56.847661 kernel: kvm-guest: setup async PF for cpu 1 Jul 15 11:36:56.847668 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Jul 15 11:36:56.847675 kernel: #2 Jul 15 11:36:56.847682 kernel: kvm-clock: cpu 2, msr 2319b081, secondary cpu clock Jul 15 11:36:56.847689 kernel: kvm-guest: setup async PF for cpu 2 Jul 15 11:36:56.847696 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Jul 15 11:36:56.847703 kernel: #3 Jul 15 11:36:56.847709 kernel: kvm-clock: cpu 3, msr 2319b0c1, secondary cpu clock Jul 15 11:36:56.847725 kernel: kvm-guest: setup async PF for cpu 3 Jul 15 11:36:56.847732 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Jul 15 11:36:56.847739 kernel: smp: Brought up 1 node, 4 CPUs Jul 15 11:36:56.847745 kernel: smpboot: Max logical packages: 1 Jul 15 11:36:56.847754 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 15 11:36:56.847761 kernel: devtmpfs: initialized Jul 15 11:36:56.847768 kernel: x86/mm: Memory block size: 128MB Jul 15 11:36:56.847774 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 15 11:36:56.847781 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 15 11:36:56.847788 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 15 11:36:56.847795 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 15 11:36:56.847802 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 15 11:36:56.847808 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 11:36:56.847816 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 15 11:36:56.847823 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 11:36:56.847830 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 11:36:56.847836 kernel: audit: initializing netlink subsys (disabled) Jul 15 11:36:56.847843 kernel: audit: type=2000 audit(1752579416.446:1): state=initialized audit_enabled=0 res=1 Jul 15 11:36:56.847850 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 11:36:56.847857 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 11:36:56.847864 kernel: cpuidle: using governor menu Jul 15 11:36:56.847870 kernel: ACPI: bus type PCI registered Jul 15 11:36:56.847878 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 11:36:56.847891 kernel: dca service started, version 1.12.1 Jul 15 11:36:56.847898 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 15 11:36:56.847905 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Jul 15 11:36:56.847912 kernel: PCI: Using configuration type 1 for base access Jul 15 11:36:56.847918 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 11:36:56.847926 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 11:36:56.847932 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 11:36:56.847941 kernel: ACPI: Added _OSI(Module Device) Jul 15 11:36:56.847949 kernel: ACPI: Added _OSI(Processor Device) Jul 15 11:36:56.847956 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 11:36:56.847966 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 15 11:36:56.847973 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 15 11:36:56.847979 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 15 11:36:56.847986 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 11:36:56.847993 kernel: ACPI: Interpreter enabled Jul 15 11:36:56.848000 kernel: ACPI: PM: (supports S0 S3 S5) Jul 15 11:36:56.848006 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 11:36:56.848015 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 11:36:56.848021 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 15 11:36:56.848028 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 11:36:56.848144 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 11:36:56.848219 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 15 11:36:56.848287 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 15 11:36:56.848296 kernel: PCI host bridge to bus 0000:00 Jul 15 11:36:56.848371 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 11:36:56.848435 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 15 11:36:56.848497 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 11:36:56.848558 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 15 11:36:56.848629 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 15 11:36:56.848691 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jul 15 11:36:56.848766 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 11:36:56.848897 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 15 11:36:56.848980 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 15 11:36:56.849059 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 15 11:36:56.849131 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jul 15 11:36:56.849200 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 15 11:36:56.849267 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jul 15 11:36:56.849340 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 15 11:36:56.849421 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 15 11:36:56.849495 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jul 15 11:36:56.849564 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jul 15 11:36:56.849631 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 15 11:36:56.849706 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 15 11:36:56.849790 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jul 15 11:36:56.849860 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 15 11:36:56.849937 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 15 11:36:56.850013 
kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 15 11:36:56.850080 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jul 15 11:36:56.850147 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 15 11:36:56.850216 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 15 11:36:56.850285 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 15 11:36:56.851104 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 15 11:36:56.851212 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 15 11:36:56.851318 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 15 11:36:56.851402 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jul 15 11:36:56.851497 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jul 15 11:36:56.851585 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 15 11:36:56.851672 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jul 15 11:36:56.851683 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 11:36:56.851690 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 11:36:56.851697 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 15 11:36:56.851704 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 11:36:56.851724 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 15 11:36:56.851731 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 15 11:36:56.851738 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 15 11:36:56.851744 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 15 11:36:56.851766 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 15 11:36:56.851773 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 15 11:36:56.851780 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 15 11:36:56.851787 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 15 11:36:56.851794 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 15 11:36:56.851800 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 15 11:36:56.851807 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 15 11:36:56.851814 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 15 11:36:56.851821 kernel: iommu: Default domain type: Translated Jul 15 11:36:56.851829 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 11:36:56.851924 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 15 11:36:56.852008 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 15 11:36:56.852090 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 15 11:36:56.852100 kernel: vgaarb: loaded Jul 15 11:36:56.852107 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 15 11:36:56.852114 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 15 11:36:56.852121 kernel: PTP clock support registered Jul 15 11:36:56.852140 kernel: Registered efivars operations Jul 15 11:36:56.852150 kernel: PCI: Using ACPI for IRQ routing Jul 15 11:36:56.852157 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 11:36:56.852163 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 15 11:36:56.852170 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 15 11:36:56.852177 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Jul 15 11:36:56.852183 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Jul 15 11:36:56.852190 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 15 11:36:56.852209 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 15 11:36:56.852220 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 15 11:36:56.852229 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 15 11:36:56.852238 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 11:36:56.852260 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 11:36:56.852267 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 11:36:56.852274 kernel: pnp: PnP ACPI init Jul 15 11:36:56.852383 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 15 11:36:56.852394 kernel: pnp: PnP ACPI: found 6 devices Jul 15 11:36:56.852401 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 11:36:56.852422 kernel: NET: Registered PF_INET protocol family Jul 15 11:36:56.852430 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 11:36:56.852437 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 11:36:56.852444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 11:36:56.852451 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 11:36:56.852465 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 15 11:36:56.852476 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 11:36:56.852483 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 11:36:56.852492 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 11:36:56.852499 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 11:36:56.852516 kernel: NET: Registered PF_XDP protocol family Jul 15 11:36:56.852604 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 15 11:36:56.852701 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 15 11:36:56.852791 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 11:36:56.852877 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 11:36:56.852965 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 11:36:56.853056 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 15 11:36:56.853116 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 15 11:36:56.853175 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jul 15 11:36:56.853184 kernel: PCI: CLS 0 bytes, default 64 Jul 15 11:36:56.853191 kernel: Initialise system trusted keyrings Jul 15 11:36:56.853198 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 11:36:56.853205 
kernel: Key type asymmetric registered Jul 15 11:36:56.853212 kernel: Asymmetric key parser 'x509' registered Jul 15 11:36:56.853219 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 15 11:36:56.853228 kernel: io scheduler mq-deadline registered Jul 15 11:36:56.853235 kernel: io scheduler kyber registered Jul 15 11:36:56.853250 kernel: io scheduler bfq registered Jul 15 11:36:56.853258 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 11:36:56.853266 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 15 11:36:56.853273 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 15 11:36:56.853281 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 15 11:36:56.853288 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 11:36:56.853295 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 11:36:56.853304 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 11:36:56.853311 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 11:36:56.853318 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 11:36:56.853325 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 15 11:36:56.853401 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 15 11:36:56.853466 kernel: rtc_cmos 00:04: registered as rtc0 Jul 15 11:36:56.853529 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T11:36:56 UTC (1752579416) Jul 15 11:36:56.853590 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 15 11:36:56.853602 kernel: efifb: probing for efifb Jul 15 11:36:56.853610 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 15 11:36:56.853617 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 15 11:36:56.853624 kernel: efifb: scrolling: redraw Jul 15 11:36:56.853632 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 15 11:36:56.853639 kernel: Console: switching to colour frame buffer device 160x50 Jul 15 11:36:56.853646 kernel: fb0: EFI VGA frame buffer device Jul 15 11:36:56.853653 kernel: pstore: Registered efi as persistent store backend Jul 15 11:36:56.853660 kernel: NET: Registered PF_INET6 protocol family Jul 15 11:36:56.853668 kernel: Segment Routing with IPv6 Jul 15 11:36:56.853676 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 11:36:56.853683 kernel: NET: Registered PF_PACKET protocol family Jul 15 11:36:56.853691 kernel: Key type dns_resolver registered Jul 15 11:36:56.853698 kernel: IPI shorthand broadcast: enabled Jul 15 11:36:56.853705 kernel: sched_clock: Marking stable (450536700, 122705517)->(588138670, -14896453) Jul 15 11:36:56.853725 kernel: registered taskstats version 1 Jul 15 11:36:56.853732 kernel: Loading compiled-in X.509 certificates Jul 15 11:36:56.853740 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289' Jul 15 11:36:56.853747 kernel: Key type .fscrypt registered Jul 15 11:36:56.853754 kernel: Key type fscrypt-provisioning registered Jul 15 11:36:56.853761 kernel: pstore: Using crash dump compression: deflate Jul 15 11:36:56.853768 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 15 11:36:56.853776 kernel: ima: Allocated hash algorithm: sha1 Jul 15 11:36:56.853785 kernel: ima: No architecture policies found Jul 15 11:36:56.853792 kernel: clk: Disabling unused clocks Jul 15 11:36:56.853799 kernel: Freeing unused kernel image (initmem) memory: 47476K Jul 15 11:36:56.853808 kernel: Write protecting the kernel read-only data: 28672k Jul 15 11:36:56.853815 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 15 11:36:56.853822 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Jul 15 11:36:56.853830 kernel: Run /init as init process Jul 15 11:36:56.853837 kernel: with arguments: Jul 15 11:36:56.853844 kernel: /init Jul 15 11:36:56.853851 kernel: with environment: Jul 15 11:36:56.853859 kernel: HOME=/ Jul 15 11:36:56.853866 kernel: TERM=linux Jul 15 11:36:56.853873 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 11:36:56.853882 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 15 11:36:56.853899 systemd[1]: Detected virtualization kvm. Jul 15 11:36:56.853907 systemd[1]: Detected architecture x86-64. Jul 15 11:36:56.853915 systemd[1]: Running in initrd. Jul 15 11:36:56.853923 systemd[1]: No hostname configured, using default hostname. Jul 15 11:36:56.853931 systemd[1]: Hostname set to <localhost>. Jul 15 11:36:56.853939 systemd[1]: Initializing machine ID from VM UUID. Jul 15 11:36:56.853946 systemd[1]: Queued start job for default target initrd.target. Jul 15 11:36:56.853956 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:36:56.853964 systemd[1]: Reached target cryptsetup.target. Jul 15 11:36:56.853973 systemd[1]: Reached target paths.target. Jul 15 11:36:56.853982 systemd[1]: Reached target slices.target. Jul 15 11:36:56.853989 systemd[1]: Reached target swap.target. Jul 15 11:36:56.853998 systemd[1]: Reached target timers.target. Jul 15 11:36:56.854007 systemd[1]: Listening on iscsid.socket. Jul 15 11:36:56.854014 systemd[1]: Listening on iscsiuio.socket. Jul 15 11:36:56.854022 systemd[1]: Listening on systemd-journald-audit.socket. Jul 15 11:36:56.854030 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 15 11:36:56.854037 systemd[1]: Listening on systemd-journald.socket. Jul 15 11:36:56.854045 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:36:56.854053 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:36:56.854061 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:36:56.854068 systemd[1]: Reached target sockets.target. Jul 15 11:36:56.854076 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:36:56.854083 systemd[1]: Finished network-cleanup.service. Jul 15 11:36:56.854091 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 11:36:56.854099 systemd[1]: Starting systemd-journald.service... Jul 15 11:36:56.854106 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:36:56.854114 systemd[1]: Starting systemd-resolved.service... Jul 15 11:36:56.854123 systemd[1]: Starting systemd-vconsole-setup.service... Jul 15 11:36:56.854130 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:36:56.854138 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 11:36:56.854145 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 15 11:36:56.854153 kernel: audit: type=1130 audit(1752579416.845:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.854161 systemd[1]: Starting dracut-cmdline-ask.service... Jul 15 11:36:56.854171 systemd-journald[197]: Journal started Jul 15 11:36:56.854208 systemd-journald[197]: Runtime Journal (/run/log/journal/3cb734dde5ed45c5ad2d183f8edecbd5) is 6.0M, max 48.4M, 42.4M free. Jul 15 11:36:56.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.845168 systemd-modules-load[198]: Inserted module 'overlay' Jul 15 11:36:56.861794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:36:56.860225 systemd-resolved[199]: Positive Trust Anchors: Jul 15 11:36:56.872486 systemd[1]: Started systemd-journald.service. Jul 15 11:36:56.872500 kernel: audit: type=1130 audit(1752579416.863:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.872511 kernel: audit: type=1130 audit(1752579416.867:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.860233 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:36:56.874082 kernel: audit: type=1130 audit(1752579416.867:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.860260 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:36:56.884209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 11:36:56.865546 systemd-resolved[199]: Defaulting to hostname 'linux'. Jul 15 11:36:56.889360 kernel: audit: type=1130 audit(1752579416.883:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:36:56.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.866325 systemd[1]: Started systemd-resolved.service. Jul 15 11:36:56.891057 kernel: Bridge firewalling registered Jul 15 11:36:56.868109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 15 11:36:56.868318 systemd[1]: Reached target nss-lookup.target. Jul 15 11:36:56.884305 systemd[1]: Finished dracut-cmdline-ask.service. Jul 15 11:36:56.887670 systemd[1]: Starting dracut-cmdline.service... Jul 15 11:36:56.890146 systemd-modules-load[198]: Inserted module 'br_netfilter' Jul 15 11:36:56.896682 dracut-cmdline[214]: dracut-dracut-053 Jul 15 11:36:56.897681 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:36:56.908746 kernel: SCSI subsystem initialized Jul 15 11:36:56.920622 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 11:36:56.920667 kernel: device-mapper: uevent: version 1.0.3 Jul 15 11:36:56.920678 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 15 11:36:56.923320 systemd-modules-load[198]: Inserted module 'dm_multipath' Jul 15 11:36:56.924041 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:36:56.929217 kernel: audit: type=1130 audit(1752579416.924:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.925685 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:36:56.933941 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:36:56.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.938736 kernel: audit: type=1130 audit(1752579416.934:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:56.954736 kernel: Loading iSCSI transport class v2.0-870. Jul 15 11:36:56.970738 kernel: iscsi: registered transport (tcp) Jul 15 11:36:56.991806 kernel: iscsi: registered transport (qla4xxx) Jul 15 11:36:56.991833 kernel: QLogic iSCSI HBA Driver Jul 15 11:36:57.020829 systemd[1]: Finished dracut-cmdline.service. Jul 15 11:36:57.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:57.023770 systemd[1]: Starting dracut-pre-udev.service... 
Jul 15 11:36:57.027133 kernel: audit: type=1130 audit(1752579417.021:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:57.069747 kernel: raid6: avx2x4 gen() 30692 MB/s Jul 15 11:36:57.086738 kernel: raid6: avx2x4 xor() 8471 MB/s Jul 15 11:36:57.103736 kernel: raid6: avx2x2 gen() 32462 MB/s Jul 15 11:36:57.120736 kernel: raid6: avx2x2 xor() 19235 MB/s Jul 15 11:36:57.137734 kernel: raid6: avx2x1 gen() 26416 MB/s Jul 15 11:36:57.154733 kernel: raid6: avx2x1 xor() 15385 MB/s Jul 15 11:36:57.171734 kernel: raid6: sse2x4 gen() 14793 MB/s Jul 15 11:36:57.188737 kernel: raid6: sse2x4 xor() 7557 MB/s Jul 15 11:36:57.205736 kernel: raid6: sse2x2 gen() 16469 MB/s Jul 15 11:36:57.222737 kernel: raid6: sse2x2 xor() 9838 MB/s Jul 15 11:36:57.239740 kernel: raid6: sse2x1 gen() 12194 MB/s Jul 15 11:36:57.257082 kernel: raid6: sse2x1 xor() 7789 MB/s Jul 15 11:36:57.257105 kernel: raid6: using algorithm avx2x2 gen() 32462 MB/s Jul 15 11:36:57.257114 kernel: raid6: .... xor() 19235 MB/s, rmw enabled Jul 15 11:36:57.257767 kernel: raid6: using avx2x2 recovery algorithm Jul 15 11:36:57.269739 kernel: xor: automatically using best checksumming function avx Jul 15 11:36:57.362759 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 15 11:36:57.371191 systemd[1]: Finished dracut-pre-udev.service. Jul 15 11:36:57.375488 kernel: audit: type=1130 audit(1752579417.370:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:57.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:57.375000 audit: BPF prog-id=7 op=LOAD Jul 15 11:36:57.375000 audit: BPF prog-id=8 op=LOAD Jul 15 11:36:57.375830 systemd[1]: Starting systemd-udevd.service... Jul 15 11:36:57.387507 systemd-udevd[401]: Using default interface naming scheme 'v252'. Jul 15 11:36:57.391214 systemd[1]: Started systemd-udevd.service. Jul 15 11:36:57.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:57.393110 systemd[1]: Starting dracut-pre-trigger.service... Jul 15 11:36:57.402985 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jul 15 11:36:57.427176 systemd[1]: Finished dracut-pre-trigger.service. Jul 15 11:36:57.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:57.429365 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:36:57.460747 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:36:57.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:57.491048 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 11:36:57.496199 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jul 15 11:36:57.496211 kernel: GPT:9289727 != 19775487 Jul 15 11:36:57.496219 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 11:36:57.496229 kernel: GPT:9289727 != 19775487 Jul 15 11:36:57.496237 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 11:36:57.496245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:36:57.503740 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 11:36:57.507761 kernel: libata version 3.00 loaded. Jul 15 11:36:57.516182 kernel: AVX2 version of gcm_enc/dec engaged. Jul 15 11:36:57.516204 kernel: AES CTR mode by8 optimization enabled Jul 15 11:36:57.522734 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 11:36:57.540933 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 11:36:57.540948 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 15 11:36:57.541040 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 11:36:57.541118 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458) Jul 15 11:36:57.541127 kernel: scsi host0: ahci Jul 15 11:36:57.541221 kernel: scsi host1: ahci Jul 15 11:36:57.541308 kernel: scsi host2: ahci Jul 15 11:36:57.541389 kernel: scsi host3: ahci Jul 15 11:36:57.541469 kernel: scsi host4: ahci Jul 15 11:36:57.541553 kernel: scsi host5: ahci Jul 15 11:36:57.541637 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jul 15 11:36:57.541649 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jul 15 11:36:57.541658 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jul 15 11:36:57.541666 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jul 15 11:36:57.541675 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jul 15 11:36:57.541684 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jul 15 11:36:57.539872 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 15 11:36:57.544788 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 15 11:36:57.549835 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:36:57.552384 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 15 11:36:57.552625 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 15 11:36:57.556548 systemd[1]: Starting disk-uuid.service... Jul 15 11:36:57.563191 disk-uuid[532]: Primary Header is updated. Jul 15 11:36:57.563191 disk-uuid[532]: Secondary Entries is updated. Jul 15 11:36:57.563191 disk-uuid[532]: Secondary Header is updated. 
Jul 15 11:36:57.567750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:36:57.570738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:36:57.853047 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 11:36:57.853127 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 11:36:57.853142 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 11:36:57.857066 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 11:36:57.857087 kernel: ata3.00: applying bridge limits Jul 15 11:36:57.857097 kernel: ata3.00: configured for UDMA/100 Jul 15 11:36:57.857106 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 11:36:57.857743 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 11:36:57.858743 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 11:36:57.859747 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 11:36:57.892752 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 11:36:57.909301 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 11:36:57.909316 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 11:36:58.649628 disk-uuid[535]: The operation has completed successfully. Jul 15 11:36:58.650902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:36:58.671868 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 11:36:58.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.671949 systemd[1]: Finished disk-uuid.service. Jul 15 11:36:58.676399 systemd[1]: Starting verity-setup.service... Jul 15 11:36:58.687745 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 15 11:36:58.705359 systemd[1]: Found device dev-mapper-usr.device. Jul 15 11:36:58.707052 systemd[1]: Mounting sysusr-usr.mount... Jul 15 11:36:58.708851 systemd[1]: Finished verity-setup.service. Jul 15 11:36:58.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.764754 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 15 11:36:58.764975 systemd[1]: Mounted sysusr-usr.mount. Jul 15 11:36:58.765793 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 15 11:36:58.766344 systemd[1]: Starting ignition-setup.service... Jul 15 11:36:58.767501 systemd[1]: Starting parse-ip-for-networkd.service... Jul 15 11:36:58.774921 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:36:58.774948 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:36:58.774957 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:36:58.782314 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 15 11:36:58.812799 systemd[1]: Finished ignition-setup.service. Jul 15 11:36:58.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:36:58.815056 systemd[1]: Starting ignition-fetch-offline.service... Jul 15 11:36:58.834973 systemd[1]: Finished parse-ip-for-networkd.service. Jul 15 11:36:58.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.837000 audit: BPF prog-id=9 op=LOAD Jul 15 11:36:58.838386 systemd[1]: Starting systemd-networkd.service... Jul 15 11:36:58.850167 ignition[690]: Ignition 2.14.0 Jul 15 11:36:58.850175 ignition[690]: Stage: fetch-offline Jul 15 11:36:58.850227 ignition[690]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:36:58.850235 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:36:58.850319 ignition[690]: parsed url from cmdline: "" Jul 15 11:36:58.850321 ignition[690]: no config URL provided Jul 15 11:36:58.850326 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 11:36:58.850331 ignition[690]: no config at "/usr/lib/ignition/user.ign" Jul 15 11:36:58.850346 ignition[690]: op(1): [started] loading QEMU firmware config module Jul 15 11:36:58.850350 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 11:36:58.854785 ignition[690]: op(1): [finished] loading QEMU firmware config module Jul 15 11:36:58.859490 systemd-networkd[725]: lo: Link UP Jul 15 11:36:58.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.859498 systemd-networkd[725]: lo: Gained carrier Jul 15 11:36:58.859880 systemd-networkd[725]: Enumeration completed Jul 15 11:36:58.859941 systemd[1]: Started systemd-networkd.service. Jul 15 11:36:58.860807 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:36:58.861421 systemd[1]: Reached target network.target. Jul 15 11:36:58.861636 systemd-networkd[725]: eth0: Link UP Jul 15 11:36:58.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.861639 systemd-networkd[725]: eth0: Gained carrier Jul 15 11:36:58.862879 systemd[1]: Starting iscsiuio.service... Jul 15 11:36:58.868096 systemd[1]: Started iscsiuio.service. Jul 15 11:36:58.869529 systemd[1]: Starting iscsid.service... Jul 15 11:36:58.873333 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:36:58.873333 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 15 11:36:58.873333 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 15 11:36:58.873333 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 15 11:36:58.873333 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:36:58.873333 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 15 11:36:58.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.873416 systemd[1]: Started iscsid.service. Jul 15 11:36:58.874674 systemd[1]: Starting dracut-initqueue.service... Jul 15 11:36:58.883241 systemd[1]: Finished dracut-initqueue.service. Jul 15 11:36:58.884123 systemd[1]: Reached target remote-fs-pre.target. Jul 15 11:36:58.886180 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:36:58.888295 systemd[1]: Reached target remote-fs.target. Jul 15 11:36:58.889607 systemd[1]: Starting dracut-pre-mount.service... Jul 15 11:36:58.896054 systemd[1]: Finished dracut-pre-mount.service. Jul 15 11:36:58.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.919164 ignition[690]: parsing config with SHA512: c6703b0446ad19f5f4d1a6f0a09e0aeb8ffd47d828b2df4d3815188cc6fb54b271201f0a7833441a1ef88e454fa885a94ad0aa5014f2e965b4d48495535c4d17 Jul 15 11:36:58.925186 unknown[690]: fetched base config from "system" Jul 15 11:36:58.925200 unknown[690]: fetched user config from "qemu" Jul 15 11:36:58.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.925623 ignition[690]: fetch-offline: fetch-offline passed Jul 15 11:36:58.926645 systemd[1]: Finished ignition-fetch-offline.service. Jul 15 11:36:58.925667 ignition[690]: Ignition finished successfully Jul 15 11:36:58.926765 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:36:58.927541 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 11:36:58.928090 systemd[1]: Starting ignition-kargs.service... Jul 15 11:36:58.936298 ignition[746]: Ignition 2.14.0 Jul 15 11:36:58.936307 ignition[746]: Stage: kargs Jul 15 11:36:58.936383 ignition[746]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:36:58.938373 systemd[1]: Finished ignition-kargs.service. Jul 15 11:36:58.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.936392 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:36:58.940314 systemd[1]: Starting ignition-disks.service... 
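The iscsid complaints above all point at the same missing one-line configuration file. As an illustration only (nothing in this boot creates it, and the IQN below is simply the example iscsid itself prints), the file it is looking for could be written like this:

    # /etc/iscsi/initiatorname.iscsi is a single line; the IQN here is only the
    # example quoted in the warning, a real host would use its own IQN
    cat <<'EOF' > /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.2001-04.com.redhat:fc6
    EOF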
Jul 15 11:36:58.937460 ignition[746]: kargs: kargs passed Jul 15 11:36:58.937490 ignition[746]: Ignition finished successfully Jul 15 11:36:58.946621 ignition[752]: Ignition 2.14.0 Jul 15 11:36:58.946629 ignition[752]: Stage: disks Jul 15 11:36:58.946707 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:36:58.946728 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:36:58.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.948237 systemd[1]: Finished ignition-disks.service. Jul 15 11:36:58.947651 ignition[752]: disks: disks passed Jul 15 11:36:58.949889 systemd[1]: Reached target initrd-root-device.target. Jul 15 11:36:58.947681 ignition[752]: Ignition finished successfully Jul 15 11:36:58.951848 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:36:58.952757 systemd[1]: Reached target local-fs.target. Jul 15 11:36:58.954322 systemd[1]: Reached target sysinit.target. Jul 15 11:36:58.955202 systemd[1]: Reached target basic.target. Jul 15 11:36:58.957413 systemd[1]: Starting systemd-fsck-root.service... Jul 15 11:36:58.966645 systemd-fsck[760]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 15 11:36:58.971238 systemd[1]: Finished systemd-fsck-root.service. Jul 15 11:36:58.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:58.972811 systemd[1]: Mounting sysroot.mount... Jul 15 11:36:58.978489 systemd[1]: Mounted sysroot.mount. Jul 15 11:36:58.980513 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 15 11:36:58.979204 systemd[1]: Reached target initrd-root-fs.target. Jul 15 11:36:58.981442 systemd[1]: Mounting sysroot-usr.mount... Jul 15 11:36:58.982328 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 15 11:36:58.982354 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 11:36:58.982371 systemd[1]: Reached target ignition-diskful.target. Jul 15 11:36:58.984139 systemd[1]: Mounted sysroot-usr.mount. Jul 15 11:36:58.986046 systemd[1]: Starting initrd-setup-root.service... Jul 15 11:36:58.990996 initrd-setup-root[770]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 11:36:58.993602 initrd-setup-root[778]: cut: /sysroot/etc/group: No such file or directory Jul 15 11:36:58.997116 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 11:36:58.999608 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 11:36:59.021858 systemd[1]: Finished initrd-setup-root.service. Jul 15 11:36:59.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:59.023558 systemd[1]: Starting ignition-mount.service... Jul 15 11:36:59.024613 systemd[1]: Starting sysroot-boot.service... Jul 15 11:36:59.030914 bash[811]: umount: /sysroot/usr/share/oem: not mounted. 
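The systemd-fsck summary above ("ROOT: clean, 619/553520 files, 56023/553472 blocks") is an e2fsck pass over the ext4 ROOT filesystem before it is mounted at /sysroot. A minimal sketch of running the same check by hand, assuming the device is addressed by its ROOT label rather than the path this boot used:

    # Read-only check of the ext4 root filesystem; -n answers "no" to every
    # repair prompt so the filesystem is not modified
    e2fsck -n /dev/disk/by-label/ROOT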
Jul 15 11:36:59.038508 ignition[813]: INFO : Ignition 2.14.0 Jul 15 11:36:59.038508 ignition[813]: INFO : Stage: mount Jul 15 11:36:59.041367 ignition[813]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:36:59.041367 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:36:59.041367 ignition[813]: INFO : mount: mount passed Jul 15 11:36:59.041367 ignition[813]: INFO : Ignition finished successfully Jul 15 11:36:59.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:59.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:36:59.040018 systemd[1]: Finished ignition-mount.service. Jul 15 11:36:59.042197 systemd[1]: Finished sysroot-boot.service. Jul 15 11:36:59.715467 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 15 11:36:59.723392 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822) Jul 15 11:36:59.723422 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:36:59.723437 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:36:59.724173 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:36:59.727546 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 15 11:36:59.729059 systemd[1]: Starting ignition-files.service... Jul 15 11:36:59.742128 ignition[842]: INFO : Ignition 2.14.0 Jul 15 11:36:59.742128 ignition[842]: INFO : Stage: files Jul 15 11:36:59.743696 ignition[842]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:36:59.743696 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:36:59.743696 ignition[842]: DEBUG : files: compiled without relabeling support, skipping Jul 15 11:36:59.747227 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 11:36:59.747227 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 11:36:59.750809 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 11:36:59.752220 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 11:36:59.753680 unknown[842]: wrote ssh authorized keys file for user: core Jul 15 11:36:59.754664 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 11:36:59.756066 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 15 11:36:59.756066 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 15 11:36:59.756066 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 11:36:59.756066 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 15 11:36:59.801700 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 11:36:59.885944 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 11:36:59.885944 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 11:36:59.889888 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 11:36:59.889888 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:36:59.893093 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:36:59.894685 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:36:59.896362 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:36:59.897983 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:36:59.899665 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:36:59.901347 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:36:59.903002 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:36:59.904622 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 11:36:59.906937 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 11:36:59.909226 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 11:36:59.911191 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 15 11:37:00.613902 systemd-networkd[725]: eth0: Gained IPv6LL Jul 15 11:37:00.686167 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 11:37:01.045576 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 11:37:01.045576 ignition[842]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 15 11:37:01.049153 ignition[842]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 15 11:37:01.049153 ignition[842]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 15 11:37:01.049153 ignition[842]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 15 11:37:01.049153 ignition[842]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 15 11:37:01.049153 ignition[842]: INFO : files: op(e): op(f): [started] 
writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:37:01.049153 ignition[842]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:37:01.059124 ignition[842]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 15 11:37:01.059124 ignition[842]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 15 11:37:01.059124 ignition[842]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:37:01.059124 ignition[842]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:37:01.059124 ignition[842]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 15 11:37:01.059124 ignition[842]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 11:37:01.059124 ignition[842]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:37:01.086506 ignition[842]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:37:01.088100 ignition[842]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 11:37:01.088100 ignition[842]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jul 15 11:37:01.090791 ignition[842]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 11:37:01.092187 ignition[842]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:37:01.093858 ignition[842]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:37:01.095463 ignition[842]: INFO : files: files passed Jul 15 11:37:01.096182 ignition[842]: INFO : Ignition finished successfully Jul 15 11:37:01.097935 systemd[1]: Finished ignition-files.service. Jul 15 11:37:01.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.099495 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 15 11:37:01.099771 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 15 11:37:01.100362 systemd[1]: Starting ignition-quench.service... Jul 15 11:37:01.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.103080 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 11:37:01.103144 systemd[1]: Finished ignition-quench.service. 
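Everything the files stage reports writing above lands under /sysroot and is visible from the booted system. A short, hypothetical way to inspect those results after switch-root; the paths come from the log lines above, but the commands themselves are illustrative rather than something this boot ran:

    # Ignition's own summary of this run
    cat /etc/.ignition-result.json
    # Units and drop-ins written by the files stage
    systemctl cat prepare-helm.service
    systemctl cat containerd.service      # includes the 10-use-cgroupfs.conf drop-in
    # Symlink created for the Kubernetes sysext image
    ls -l /etc/extensions/kubernetes.raw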
Jul 15 11:37:01.110416 initrd-setup-root-after-ignition[867]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 15 11:37:01.111900 initrd-setup-root-after-ignition[869]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 11:37:01.112329 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 15 11:37:01.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.113985 systemd[1]: Reached target ignition-complete.target. Jul 15 11:37:01.116697 systemd[1]: Starting initrd-parse-etc.service... Jul 15 11:37:01.129557 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 11:37:01.129632 systemd[1]: Finished initrd-parse-etc.service. Jul 15 11:37:01.131334 systemd[1]: Reached target initrd-fs.target. Jul 15 11:37:01.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.131963 systemd[1]: Reached target initrd.target. Jul 15 11:37:01.133677 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 15 11:37:01.134266 systemd[1]: Starting dracut-pre-pivot.service... Jul 15 11:37:01.143141 systemd[1]: Finished dracut-pre-pivot.service. Jul 15 11:37:01.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.143967 systemd[1]: Starting initrd-cleanup.service... Jul 15 11:37:01.152010 systemd[1]: Stopped target nss-lookup.target. Jul 15 11:37:01.152313 systemd[1]: Stopped target remote-cryptsetup.target. Jul 15 11:37:01.153755 systemd[1]: Stopped target timers.target. Jul 15 11:37:01.155267 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 11:37:01.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.155350 systemd[1]: Stopped dracut-pre-pivot.service. Jul 15 11:37:01.156664 systemd[1]: Stopped target initrd.target. Jul 15 11:37:01.158346 systemd[1]: Stopped target basic.target. Jul 15 11:37:01.159679 systemd[1]: Stopped target ignition-complete.target. Jul 15 11:37:01.161129 systemd[1]: Stopped target ignition-diskful.target. Jul 15 11:37:01.162450 systemd[1]: Stopped target initrd-root-device.target. Jul 15 11:37:01.164124 systemd[1]: Stopped target remote-fs.target. Jul 15 11:37:01.165488 systemd[1]: Stopped target remote-fs-pre.target. Jul 15 11:37:01.167111 systemd[1]: Stopped target sysinit.target. Jul 15 11:37:01.168352 systemd[1]: Stopped target local-fs.target. Jul 15 11:37:01.169745 systemd[1]: Stopped target local-fs-pre.target. Jul 15 11:37:01.171078 systemd[1]: Stopped target swap.target. Jul 15 11:37:01.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:37:01.171363 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 11:37:01.179038 kernel: kauditd_printk_skb: 31 callbacks suppressed Jul 15 11:37:01.179057 kernel: audit: type=1131 audit(1752579421.172:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.171444 systemd[1]: Stopped dracut-pre-mount.service. Jul 15 11:37:01.173805 systemd[1]: Stopped target cryptsetup.target. Jul 15 11:37:01.187462 kernel: audit: type=1131 audit(1752579421.179:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.187475 kernel: audit: type=1131 audit(1752579421.181:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.178835 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 11:37:01.178916 systemd[1]: Stopped dracut-initqueue.service. Jul 15 11:37:01.180498 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 11:37:01.180579 systemd[1]: Stopped ignition-fetch-offline.service. Jul 15 11:37:01.181949 systemd[1]: Stopped target paths.target. Jul 15 11:37:01.188635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 11:37:01.193571 systemd[1]: Stopped systemd-ask-password-console.path. Jul 15 11:37:01.195235 systemd[1]: Stopped target slices.target. Jul 15 11:37:01.195522 systemd[1]: Stopped target sockets.target. Jul 15 11:37:01.197009 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 11:37:01.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.197095 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 15 11:37:01.204176 kernel: audit: type=1131 audit(1752579421.197:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.204194 kernel: audit: type=1131 audit(1752579421.202:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.198351 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 11:37:01.198429 systemd[1]: Stopped ignition-files.service. Jul 15 11:37:01.208981 iscsid[732]: iscsid shutting down. 
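The SERVICE_START/SERVICE_STOP and type=1131 entries interleaved above are kernel audit records that journald stores next to the regular unit messages; "kauditd_printk_skb: 31 callbacks suppressed" only means some of them were rate-limited on the console. A hedged example of pulling just those records back out of the journal on a system like this one:

    # Select this boot's audit records (entries journald received from the kernel
    # audit socket) and keep only the service start/stop events
    journalctl -b _TRANSPORT=audit --no-pager | grep -E 'SERVICE_(START|STOP)'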
Jul 15 11:37:01.203652 systemd[1]: Stopping ignition-mount.service... Jul 15 11:37:01.209112 systemd[1]: Stopping iscsid.service... Jul 15 11:37:01.211661 ignition[882]: INFO : Ignition 2.14.0 Jul 15 11:37:01.211661 ignition[882]: INFO : Stage: umount Jul 15 11:37:01.211661 ignition[882]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:37:01.211661 ignition[882]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:37:01.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.211008 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 11:37:01.219578 kernel: audit: type=1131 audit(1752579421.214:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.219591 ignition[882]: INFO : umount: umount passed Jul 15 11:37:01.219591 ignition[882]: INFO : Ignition finished successfully Jul 15 11:37:01.211728 systemd[1]: Stopped kmod-static-nodes.service. Jul 15 11:37:01.222395 systemd[1]: Stopping sysroot-boot.service... Jul 15 11:37:01.223787 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 11:37:01.224835 systemd[1]: Stopped systemd-udev-trigger.service. Jul 15 11:37:01.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.226506 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 11:37:01.230432 kernel: audit: type=1131 audit(1752579421.225:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.226589 systemd[1]: Stopped dracut-pre-trigger.service. Jul 15 11:37:01.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.233449 systemd[1]: iscsid.service: Deactivated successfully. Jul 15 11:37:01.235837 kernel: audit: type=1131 audit(1752579421.231:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.233529 systemd[1]: Stopped iscsid.service. Jul 15 11:37:01.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.238121 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 11:37:01.240830 kernel: audit: type=1131 audit(1752579421.236:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.238530 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 11:37:01.238599 systemd[1]: Stopped ignition-mount.service. Jul 15 11:37:01.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:37:01.243527 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 11:37:01.246932 kernel: audit: type=1131 audit(1752579421.242:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.243594 systemd[1]: Closed iscsid.socket. Jul 15 11:37:01.248201 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 11:37:01.248237 systemd[1]: Stopped ignition-disks.service. Jul 15 11:37:01.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.250637 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 11:37:01.250667 systemd[1]: Stopped ignition-kargs.service. Jul 15 11:37:01.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.253056 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 11:37:01.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.253086 systemd[1]: Stopped ignition-setup.service. Jul 15 11:37:01.254804 systemd[1]: Stopping iscsiuio.service... Jul 15 11:37:01.256953 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 11:37:01.257891 systemd[1]: Finished initrd-cleanup.service. Jul 15 11:37:01.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.259604 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 15 11:37:01.260553 systemd[1]: Stopped iscsiuio.service. Jul 15 11:37:01.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.262642 systemd[1]: Stopped target network.target. Jul 15 11:37:01.264108 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 11:37:01.264138 systemd[1]: Closed iscsiuio.socket. Jul 15 11:37:01.266254 systemd[1]: Stopping systemd-networkd.service... Jul 15 11:37:01.267882 systemd[1]: Stopping systemd-resolved.service... Jul 15 11:37:01.269767 systemd-networkd[725]: eth0: DHCPv6 lease lost Jul 15 11:37:01.270770 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 11:37:01.271703 systemd[1]: Stopped systemd-networkd.service. Jul 15 11:37:01.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.273754 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 11:37:01.273793 systemd[1]: Closed systemd-networkd.socket. Jul 15 11:37:01.275000 audit: BPF prog-id=9 op=UNLOAD Jul 15 11:37:01.276747 systemd[1]: Stopping network-cleanup.service... 
Jul 15 11:37:01.278316 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 11:37:01.278360 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 15 11:37:01.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.281042 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:37:01.281078 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:37:01.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.283472 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 11:37:01.283503 systemd[1]: Stopped systemd-modules-load.service. Jul 15 11:37:01.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.286184 systemd[1]: Stopping systemd-udevd.service... Jul 15 11:37:01.288295 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 11:37:01.289828 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 11:37:01.289907 systemd[1]: Stopped systemd-resolved.service. Jul 15 11:37:01.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.294698 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 11:37:01.295658 systemd[1]: Stopped network-cleanup.service. Jul 15 11:37:01.295000 audit: BPF prog-id=6 op=UNLOAD Jul 15 11:37:01.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.297499 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 11:37:01.298463 systemd[1]: Stopped systemd-udevd.service. Jul 15 11:37:01.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.300371 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 11:37:01.301273 systemd[1]: Stopped sysroot-boot.service. Jul 15 11:37:01.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.302812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 11:37:01.302846 systemd[1]: Closed systemd-udevd-control.socket. Jul 15 11:37:01.305344 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 11:37:01.305377 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 15 11:37:01.307738 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 11:37:01.307781 systemd[1]: Stopped dracut-pre-udev.service. Jul 15 11:37:01.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:37:01.310107 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 11:37:01.310139 systemd[1]: Stopped dracut-cmdline.service. Jul 15 11:37:01.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.312411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 11:37:01.312441 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 15 11:37:01.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.314824 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 11:37:01.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.314853 systemd[1]: Stopped initrd-setup-root.service. Jul 15 11:37:01.317797 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 15 11:37:01.319430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 11:37:01.319469 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 15 11:37:01.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.322440 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 11:37:01.323523 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 15 11:37:01.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:01.325292 systemd[1]: Reached target initrd-switch-root.target. Jul 15 11:37:01.327594 systemd[1]: Starting initrd-switch-root.service... Jul 15 11:37:01.332323 systemd[1]: Switching root. Jul 15 11:37:01.335000 audit: BPF prog-id=8 op=UNLOAD Jul 15 11:37:01.335000 audit: BPF prog-id=7 op=UNLOAD Jul 15 11:37:01.335000 audit: BPF prog-id=5 op=UNLOAD Jul 15 11:37:01.335000 audit: BPF prog-id=4 op=UNLOAD Jul 15 11:37:01.335000 audit: BPF prog-id=3 op=UNLOAD Jul 15 11:37:01.350401 systemd-journald[197]: Journal stopped Jul 15 11:37:03.841516 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Jul 15 11:37:03.841563 kernel: SELinux: Class mctp_socket not defined in policy. Jul 15 11:37:03.841579 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 15 11:37:03.841589 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 15 11:37:03.841601 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 11:37:03.841612 kernel: SELinux: policy capability open_perms=1 Jul 15 11:37:03.841622 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 11:37:03.841634 kernel: SELinux: policy capability always_check_network=0 Jul 15 11:37:03.841643 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 11:37:03.841653 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 11:37:03.841663 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 11:37:03.841672 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 11:37:03.841694 systemd[1]: Successfully loaded SELinux policy in 38.573ms. Jul 15 11:37:03.841724 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.320ms. Jul 15 11:37:03.841737 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 15 11:37:03.841748 systemd[1]: Detected virtualization kvm. Jul 15 11:37:03.841758 systemd[1]: Detected architecture x86-64. Jul 15 11:37:03.841770 systemd[1]: Detected first boot. Jul 15 11:37:03.841780 systemd[1]: Initializing machine ID from VM UUID. Jul 15 11:37:03.841790 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 15 11:37:03.841800 systemd[1]: Populated /etc with preset unit settings. Jul 15 11:37:03.841811 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:37:03.841825 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:37:03.841836 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:37:03.841848 systemd[1]: Queued start job for default target multi-user.target. Jul 15 11:37:03.841859 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 15 11:37:03.841869 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 15 11:37:03.841880 systemd[1]: Created slice system-addon\x2drun.slice. Jul 15 11:37:03.841890 systemd[1]: Created slice system-getty.slice. Jul 15 11:37:03.841901 systemd[1]: Created slice system-modprobe.slice. Jul 15 11:37:03.841912 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 15 11:37:03.841923 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 15 11:37:03.841933 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 15 11:37:03.841943 systemd[1]: Created slice user.slice. Jul 15 11:37:03.841954 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:37:03.841965 systemd[1]: Started systemd-ask-password-wall.path. Jul 15 11:37:03.841975 systemd[1]: Set up automount boot.automount. Jul 15 11:37:03.841985 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 15 11:37:03.841995 systemd[1]: Reached target integritysetup.target. 
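The SELinux messages above report the policy loading successfully while a few newer object classes (mctp_socket, anon_inode) are unknown to it and will simply be allowed. A brief, hedged way to confirm the resulting mode from the running system; /sys/fs/selinux is the kernel interface and is present whenever SELinux is enabled, getenforce only if the policy utilities are installed:

    # 1 = enforcing, 0 = permissive
    cat /sys/fs/selinux/enforce
    getenforce 2>/dev/null || true   # same answer as a word, when the tool exists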
Jul 15 11:37:03.842005 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:37:03.842015 systemd[1]: Reached target remote-fs.target. Jul 15 11:37:03.842025 systemd[1]: Reached target slices.target. Jul 15 11:37:03.842037 systemd[1]: Reached target swap.target. Jul 15 11:37:03.842048 systemd[1]: Reached target torcx.target. Jul 15 11:37:03.842058 systemd[1]: Reached target veritysetup.target. Jul 15 11:37:03.842069 systemd[1]: Listening on systemd-coredump.socket. Jul 15 11:37:03.842079 systemd[1]: Listening on systemd-initctl.socket. Jul 15 11:37:03.842090 systemd[1]: Listening on systemd-journald-audit.socket. Jul 15 11:37:03.842100 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 15 11:37:03.842110 systemd[1]: Listening on systemd-journald.socket. Jul 15 11:37:03.842120 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:37:03.842131 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:37:03.842143 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:37:03.842154 systemd[1]: Listening on systemd-userdbd.socket. Jul 15 11:37:03.842164 systemd[1]: Mounting dev-hugepages.mount... Jul 15 11:37:03.842174 systemd[1]: Mounting dev-mqueue.mount... Jul 15 11:37:03.842184 systemd[1]: Mounting media.mount... Jul 15 11:37:03.842194 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:03.842204 systemd[1]: Mounting sys-kernel-debug.mount... Jul 15 11:37:03.842214 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 15 11:37:03.842226 systemd[1]: Mounting tmp.mount... Jul 15 11:37:03.842238 systemd[1]: Starting flatcar-tmpfiles.service... Jul 15 11:37:03.842248 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:37:03.842258 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:37:03.842268 systemd[1]: Starting modprobe@configfs.service... Jul 15 11:37:03.842278 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:37:03.842288 systemd[1]: Starting modprobe@drm.service... Jul 15 11:37:03.842298 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:37:03.842308 systemd[1]: Starting modprobe@fuse.service... Jul 15 11:37:03.842317 systemd[1]: Starting modprobe@loop.service... Jul 15 11:37:03.842330 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 11:37:03.842340 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 15 11:37:03.842351 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 15 11:37:03.842362 systemd[1]: Starting systemd-journald.service... Jul 15 11:37:03.842371 kernel: fuse: init (API version 7.34) Jul 15 11:37:03.842381 kernel: loop: module loaded Jul 15 11:37:03.842391 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:37:03.842402 systemd[1]: Starting systemd-network-generator.service... Jul 15 11:37:03.842412 systemd[1]: Starting systemd-remount-fs.service... Jul 15 11:37:03.842423 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:37:03.842434 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:03.842444 systemd[1]: Mounted dev-hugepages.mount. Jul 15 11:37:03.842454 systemd[1]: Mounted dev-mqueue.mount. Jul 15 11:37:03.842464 systemd[1]: Mounted media.mount. 
Jul 15 11:37:03.842474 systemd[1]: Mounted sys-kernel-debug.mount. Jul 15 11:37:03.842484 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 15 11:37:03.842497 systemd-journald[1023]: Journal started Jul 15 11:37:03.842537 systemd-journald[1023]: Runtime Journal (/run/log/journal/3cb734dde5ed45c5ad2d183f8edecbd5) is 6.0M, max 48.4M, 42.4M free. Jul 15 11:37:03.764000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:37:03.764000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 15 11:37:03.840000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 15 11:37:03.840000 audit[1023]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffda2a6e080 a2=4000 a3=7ffda2a6e11c items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:03.840000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 15 11:37:03.844902 systemd[1]: Started systemd-journald.service. Jul 15 11:37:03.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.846195 systemd[1]: Mounted tmp.mount. Jul 15 11:37:03.847487 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:37:03.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.848626 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 11:37:03.848834 systemd[1]: Finished modprobe@configfs.service. Jul 15 11:37:03.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.850111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:37:03.850408 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:37:03.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.851648 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:37:03.851930 systemd[1]: Finished modprobe@drm.service. 
Jul 15 11:37:03.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.853066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:37:03.853397 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:37:03.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.854641 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 11:37:03.854885 systemd[1]: Finished modprobe@fuse.service. Jul 15 11:37:03.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.856235 systemd[1]: Finished flatcar-tmpfiles.service. Jul 15 11:37:03.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.857359 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:37:03.857657 systemd[1]: Finished modprobe@loop.service. Jul 15 11:37:03.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.859102 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:37:03.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.860433 systemd[1]: Finished systemd-network-generator.service. Jul 15 11:37:03.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.862190 systemd[1]: Finished systemd-remount-fs.service. 
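The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services started and immediately deactivated above are instances of systemd's modprobe@.service template: each instance runs modprobe for the module named after the "@", which is why the kernel follows up with "fuse: init" and "loop: module loaded". A rough equivalent by hand:

    # Same effect as one template instance: load the module named by the instance
    modprobe loop
    # or go through the template unit itself
    systemctl start modprobe@fuse.service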
Jul 15 11:37:03.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.863365 systemd[1]: Reached target network-pre.target. Jul 15 11:37:03.865346 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 15 11:37:03.867029 systemd[1]: Mounting sys-kernel-config.mount... Jul 15 11:37:03.867809 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 11:37:03.869608 systemd[1]: Starting systemd-hwdb-update.service... Jul 15 11:37:03.874374 systemd[1]: Starting systemd-journal-flush.service... Jul 15 11:37:03.875311 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:37:03.878830 systemd-journald[1023]: Time spent on flushing to /var/log/journal/3cb734dde5ed45c5ad2d183f8edecbd5 is 18.535ms for 1095 entries. Jul 15 11:37:03.878830 systemd-journald[1023]: System Journal (/var/log/journal/3cb734dde5ed45c5ad2d183f8edecbd5) is 8.0M, max 195.6M, 187.6M free. Jul 15 11:37:03.915359 systemd-journald[1023]: Received client request to flush runtime journal. Jul 15 11:37:03.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.876207 systemd[1]: Starting systemd-random-seed.service... Jul 15 11:37:03.877073 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:37:03.877875 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:37:03.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:03.880693 systemd[1]: Starting systemd-sysusers.service... Jul 15 11:37:03.918582 udevadm[1064]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 15 11:37:03.884123 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:37:03.885088 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 15 11:37:03.886123 systemd[1]: Mounted sys-kernel-config.mount. Jul 15 11:37:03.887904 systemd[1]: Starting systemd-udev-settle.service... 
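The systemd-journald lines above describe flushing the runtime journal kept in /run into the persistent system journal under /var/log/journal (reported here as 8.0M used out of a 195.6M cap). A small sketch for checking and, if desired, capping those sizes; the option names are standard journald.conf settings and the values below are examples, not taken from this machine:

    # Current on-disk usage of all journal files
    journalctl --disk-usage
    # Cap persistent and runtime journal size via a drop-in
    mkdir -p /etc/systemd/journald.conf.d
    printf '[Journal]\nSystemMaxUse=200M\nRuntimeMaxUse=50M\n' > /etc/systemd/journald.conf.d/size.conf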
Jul 15 11:37:03.889211 systemd[1]: Finished systemd-random-seed.service. Jul 15 11:37:03.890242 systemd[1]: Reached target first-boot-complete.target. Jul 15 11:37:03.894541 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:37:03.895953 systemd[1]: Finished systemd-sysusers.service. Jul 15 11:37:03.897882 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:37:03.916133 systemd[1]: Finished systemd-journal-flush.service. Jul 15 11:37:03.917359 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 15 11:37:04.288316 systemd[1]: Finished systemd-hwdb-update.service. Jul 15 11:37:04.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.290402 systemd[1]: Starting systemd-udevd.service... Jul 15 11:37:04.305594 systemd-udevd[1075]: Using default interface naming scheme 'v252'. Jul 15 11:37:04.317309 systemd[1]: Started systemd-udevd.service. Jul 15 11:37:04.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.320364 systemd[1]: Starting systemd-networkd.service... Jul 15 11:37:04.324474 systemd[1]: Starting systemd-userdbd.service... Jul 15 11:37:04.343621 systemd[1]: Found device dev-ttyS0.device. Jul 15 11:37:04.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.364036 systemd[1]: Started systemd-userdbd.service. Jul 15 11:37:04.378881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:37:04.386835 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 15 11:37:04.390748 kernel: ACPI: button: Power Button [PWRF] Jul 15 11:37:04.400000 audit[1091]: AVC avc: denied { confidentiality } for pid=1091 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 15 11:37:04.412828 systemd-networkd[1089]: lo: Link UP Jul 15 11:37:04.413061 systemd-networkd[1089]: lo: Gained carrier Jul 15 11:37:04.413470 systemd-networkd[1089]: Enumeration completed Jul 15 11:37:04.413624 systemd-networkd[1089]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:37:04.413633 systemd[1]: Started systemd-networkd.service. Jul 15 11:37:04.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:37:04.415475 systemd-networkd[1089]: eth0: Link UP Jul 15 11:37:04.415546 systemd-networkd[1089]: eth0: Gained carrier Jul 15 11:37:04.424821 systemd-networkd[1089]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:37:04.400000 audit[1091]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b941b7c2e0 a1=338ac a2=7f17a22b6bc5 a3=5 items=110 ppid=1075 pid=1091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:04.400000 audit: CWD cwd="/" Jul 15 11:37:04.400000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=1 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=2 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=3 name=(null) inode=15918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=4 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=5 name=(null) inode=15919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=6 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.440751 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 15 11:37:04.444371 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 11:37:04.444486 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 15 11:37:04.444594 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 11:37:04.400000 audit: PATH item=7 name=(null) inode=15920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=8 name=(null) inode=15920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=9 name=(null) inode=15921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=10 name=(null) inode=15920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=11 name=(null) inode=15922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=12 name=(null) inode=15920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=13 name=(null) inode=15923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=14 name=(null) inode=15920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=15 name=(null) inode=15924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=16 name=(null) inode=15920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=17 name=(null) inode=15925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=18 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=19 name=(null) inode=15926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=20 name=(null) inode=15926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=21 name=(null) inode=15927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=22 name=(null) inode=15926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=23 name=(null) inode=15928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=24 name=(null) inode=15926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=25 name=(null) inode=15929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=26 name=(null) inode=15926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=27 name=(null) inode=15930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: 
PATH item=28 name=(null) inode=15926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=29 name=(null) inode=15931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=30 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=31 name=(null) inode=15932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=32 name=(null) inode=15932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=33 name=(null) inode=15933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=34 name=(null) inode=15932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=35 name=(null) inode=15934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=36 name=(null) inode=15932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=37 name=(null) inode=15935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=38 name=(null) inode=15932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=39 name=(null) inode=15936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=40 name=(null) inode=15932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=41 name=(null) inode=15937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=42 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=43 name=(null) inode=15938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=44 name=(null) inode=15938 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=45 name=(null) inode=15939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=46 name=(null) inode=15938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=47 name=(null) inode=15940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=48 name=(null) inode=15938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=49 name=(null) inode=15941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=50 name=(null) inode=15938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=51 name=(null) inode=15942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=52 name=(null) inode=15938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=53 name=(null) inode=15943 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=55 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=56 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=57 name=(null) inode=15945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=58 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=59 name=(null) inode=15946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=60 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=61 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=62 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=63 name=(null) inode=15948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=64 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=65 name=(null) inode=15949 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=66 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=67 name=(null) inode=15950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=68 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=69 name=(null) inode=15951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=70 name=(null) inode=15947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=71 name=(null) inode=15952 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=72 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=73 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=74 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=75 name=(null) inode=15954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=76 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: 
PATH item=77 name=(null) inode=15955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=78 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=79 name=(null) inode=15956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=80 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=81 name=(null) inode=15957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=82 name=(null) inode=15953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=83 name=(null) inode=15958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=84 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=85 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=86 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=87 name=(null) inode=15960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=88 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=89 name=(null) inode=15961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=90 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=91 name=(null) inode=15962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=92 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=93 name=(null) inode=15963 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=94 name=(null) inode=15959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=95 name=(null) inode=15964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=96 name=(null) inode=15944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=97 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=98 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=99 name=(null) inode=15966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=100 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=101 name=(null) inode=15967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=102 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=103 name=(null) inode=15968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=104 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=105 name=(null) inode=15969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=106 name=(null) inode=15965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=107 name=(null) inode=15970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PATH item=109 name=(null) inode=14538 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:37:04.400000 audit: PROCTITLE proctitle="(udev-worker)" Jul 15 11:37:04.459736 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 11:37:04.471729 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 11:37:04.484036 kernel: kvm: Nested Virtualization enabled Jul 15 11:37:04.484087 kernel: SVM: kvm: Nested Paging enabled Jul 15 11:37:04.484102 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 15 11:37:04.484118 kernel: SVM: Virtual GIF supported Jul 15 11:37:04.500823 kernel: EDAC MC: Ver: 3.0.0 Jul 15 11:37:04.524060 systemd[1]: Finished systemd-udev-settle.service. Jul 15 11:37:04.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.526005 systemd[1]: Starting lvm2-activation-early.service... Jul 15 11:37:04.533199 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:37:04.558874 systemd[1]: Finished lvm2-activation-early.service. Jul 15 11:37:04.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.559902 systemd[1]: Reached target cryptsetup.target. Jul 15 11:37:04.561770 systemd[1]: Starting lvm2-activation.service... Jul 15 11:37:04.565765 lvm[1114]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:37:04.591346 systemd[1]: Finished lvm2-activation.service. Jul 15 11:37:04.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.592248 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:37:04.593063 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 11:37:04.593085 systemd[1]: Reached target local-fs.target. Jul 15 11:37:04.593856 systemd[1]: Reached target machines.target. Jul 15 11:37:04.595554 systemd[1]: Starting ldconfig.service... Jul 15 11:37:04.596467 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:37:04.596507 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:37:04.597351 systemd[1]: Starting systemd-boot-update.service... Jul 15 11:37:04.598876 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 15 11:37:04.600797 systemd[1]: Starting systemd-machine-id-commit.service... Jul 15 11:37:04.602617 systemd[1]: Starting systemd-sysext.service... Jul 15 11:37:04.603689 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1117 (bootctl) Jul 15 11:37:04.604578 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 15 11:37:04.608013 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
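[editor's note] Several units in this stretch are skipped because of unmet condition checks (ConditionPathExists=/var/lib/machines.raw, ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-..., and earlier ConditionDirectoryNotEmpty=/sys/fs/pstore and the negated ConditionPathIsReadWrite=!/). As a rough reading aid only, not systemd's actual implementation, the hypothetical helper below mimics how such path-based conditions could be evaluated; the condition strings are taken from the log.

    import os

    def check_condition(cond: str) -> bool:
        """Rough illustration of systemd-style path conditions.

        Supports only the forms seen in this log; a leading '!' negates.
        This is NOT systemd's implementation, just an approximation.
        """
        name, _, arg = cond.partition("=")
        negate = arg.startswith("!")
        path = arg[1:] if negate else arg

        if name == "ConditionPathExists":
            result = os.path.exists(path)
        elif name == "ConditionPathIsReadWrite":
            # approximation: systemd actually checks the mount's rw flag
            result = os.path.exists(path) and os.access(path, os.W_OK)
        elif name == "ConditionDirectoryNotEmpty":
            result = os.path.isdir(path) and bool(os.listdir(path))
        else:
            raise ValueError(f"unsupported condition: {name}")

        return not result if negate else result

    # Conditions quoted in the log above:
    for cond in [
        "ConditionPathIsReadWrite=!/",                  # remount-root.service
        "ConditionDirectoryNotEmpty=/sys/fs/pstore",    # systemd-pstore.service
        "ConditionPathExists=/var/lib/machines.raw",    # var-lib-machines.mount
    ]:
        print(cond, "->", check_condition(cond))
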
Jul 15 11:37:04.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.619451 systemd[1]: Unmounting usr-share-oem.mount... Jul 15 11:37:04.626418 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 15 11:37:04.626749 systemd[1]: Unmounted usr-share-oem.mount. Jul 15 11:37:04.636740 kernel: loop0: detected capacity change from 0 to 221472 Jul 15 11:37:04.639623 systemd-fsck[1125]: fsck.fat 4.2 (2021-01-31) Jul 15 11:37:04.639623 systemd-fsck[1125]: /dev/vda1: 791 files, 120745/258078 clusters Jul 15 11:37:04.642041 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 15 11:37:04.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.644995 systemd[1]: Mounting boot.mount... Jul 15 11:37:04.654470 systemd[1]: Mounted boot.mount. Jul 15 11:37:04.862869 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 11:37:04.862090 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 11:37:04.863462 systemd[1]: Finished systemd-machine-id-commit.service. Jul 15 11:37:04.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.864780 systemd[1]: Finished systemd-boot-update.service. Jul 15 11:37:04.878734 kernel: loop1: detected capacity change from 0 to 221472 Jul 15 11:37:04.881854 (sd-sysext)[1139]: Using extensions 'kubernetes'. Jul 15 11:37:04.882272 (sd-sysext)[1139]: Merged extensions into '/usr'. Jul 15 11:37:04.896836 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:04.898100 systemd[1]: Mounting usr-share-oem.mount... Jul 15 11:37:04.898999 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:37:04.900059 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:37:04.901585 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:37:04.903135 systemd[1]: Starting modprobe@loop.service... Jul 15 11:37:04.903922 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:37:04.904043 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:37:04.904146 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:04.906634 systemd[1]: Mounted usr-share-oem.mount. Jul 15 11:37:04.907837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:37:04.908032 systemd[1]: Finished modprobe@dm_mod.service. 
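[editor's note] Just below, (sd-sysext) reports merging the 'kubernetes' extension into '/usr'. Conceptually this layers the extension's /usr tree on top of the base /usr; systemd-sysext does it with an overlayfs mount. The toy model below only illustrates the shadowing behaviour, and the example file paths are hypothetical, not taken from the image.

    # Toy model of "Merged extensions into '/usr'": files contributed by an
    # extension image are layered over the base /usr tree (extension wins on
    # conflict). Real sd-sysext uses an overlayfs mount; paths here are made up.
    base_usr = {
        "/usr/bin/systemctl": "base image",
        "/usr/lib/os-release": "base image",
    }
    kubernetes_ext = {
        "/usr/bin/kubelet": "extension 'kubernetes' (hypothetical path)",
        "/usr/bin/kubectl": "extension 'kubernetes' (hypothetical path)",
    }

    merged = {**base_usr, **kubernetes_ext}
    for path, origin in sorted(merged.items()):
        print(f"{path:24s} <- {origin}")
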
Jul 15 11:37:04.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:04.909429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:37:04.909534 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:37:04.910859 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:37:04.910971 systemd[1]: Finished modprobe@loop.service. Jul 15 11:37:04.912212 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:37:04.912293 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:37:04.913295 systemd[1]: Finished systemd-sysext.service. Jul 15 11:37:04.916565 systemd[1]: Starting ensure-sysext.service... Jul 15 11:37:04.918255 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 15 11:37:04.919883 ldconfig[1116]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 11:37:04.922273 systemd[1]: Reloading. Jul 15 11:37:04.926440 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 15 11:37:04.927396 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 11:37:04.928691 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
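[editor's note] systemd-tmpfiles warns about duplicate lines for "/run/lock", "/root" and "/var/lib/systemd" across tmpfiles.d fragments. A hedged sketch of how one might locate such duplicates when auditing tmpfiles.d directories; the scan logic is an illustration, not systemd's own check.

    import glob
    from collections import defaultdict

    # Map each declared path to the fragments (file:line) that declare it,
    # to explain warnings like:
    #   /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock"
    seen = defaultdict(list)
    for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
        with open(conf, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(f"{conf}:{lineno}")

    for path, places in seen.items():
        if len(places) > 1:
            print(path, "declared in:", ", ".join(places))
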
Jul 15 11:37:04.967809 /usr/lib/systemd/system-generators/torcx-generator[1173]: time="2025-07-15T11:37:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:37:04.968137 /usr/lib/systemd/system-generators/torcx-generator[1173]: time="2025-07-15T11:37:04Z" level=info msg="torcx already run" Jul 15 11:37:05.038385 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:37:05.038401 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:37:05.054611 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:37:05.108265 systemd[1]: Finished ldconfig.service. Jul 15 11:37:05.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.110150 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 15 11:37:05.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.112974 systemd[1]: Starting audit-rules.service... Jul 15 11:37:05.114661 systemd[1]: Starting clean-ca-certificates.service... Jul 15 11:37:05.116499 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 15 11:37:05.118625 systemd[1]: Starting systemd-resolved.service... Jul 15 11:37:05.120501 systemd[1]: Starting systemd-timesyncd.service... Jul 15 11:37:05.122187 systemd[1]: Starting systemd-update-utmp.service... Jul 15 11:37:05.123530 systemd[1]: Finished clean-ca-certificates.service. Jul 15 11:37:05.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.128000 audit[1235]: SYSTEM_BOOT pid=1235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.131409 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:05.131658 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.132864 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:37:05.134969 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:37:05.136849 systemd[1]: Starting modprobe@loop.service... Jul 15 11:37:05.137589 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.137758 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
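[editor's note] The reload pass above flags locksmithd.service for the deprecated CPUShares= and MemoryLimit= directives and docker.socket for a path under the legacy /var/run/. A rough helper for spotting such directives in unit files, using only the replacements named in systemd's own warnings (CPUWeight=, MemoryMax=, /run/); the scanning code is an illustration, not a systemd tool.

    import pathlib

    # Replacements exactly as suggested by the warnings in the log above.
    DEPRECATED = {
        "CPUShares=": "CPUWeight=",
        "MemoryLimit=": "MemoryMax=",
        "/var/run/": "/run/",
    }

    unit_dirs = [pathlib.Path("/usr/lib/systemd/system"),
                 pathlib.Path("/run/systemd/system")]
    for unit in (p for d in unit_dirs if d.is_dir() for p in d.iterdir()):
        if not unit.is_file():
            continue
        text = unit.read_text(encoding="utf-8", errors="replace")
        for lineno, line in enumerate(text.splitlines(), 1):
            for old, new in DEPRECATED.items():
                if old in line:
                    print(f"{unit}:{lineno}: uses {old!r}; consider {new!r}")
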
Jul 15 11:37:05.137897 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:37:05.138008 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:05.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.139385 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 15 11:37:05.142223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:37:05.142376 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:37:05.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.143616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:37:05.143800 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:37:05.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.145170 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:37:05.145310 systemd[1]: Finished modprobe@loop.service. Jul 15 11:37:05.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.148918 systemd[1]: Finished systemd-update-utmp.service. Jul 15 11:37:05.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:05.150724 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:05.150913 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.152152 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:37:05.153808 systemd[1]: Starting modprobe@efi_pstore.service... 
Jul 15 11:37:05.153000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 15 11:37:05.153000 audit[1255]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffccbb5cf00 a2=420 a3=0 items=0 ppid=1223 pid=1255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:05.153000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 15 11:37:05.155093 augenrules[1255]: No rules Jul 15 11:37:05.156438 systemd[1]: Starting modprobe@loop.service... Jul 15 11:37:05.157240 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.157361 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:37:05.158615 systemd[1]: Starting systemd-update-done.service... Jul 15 11:37:05.159426 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:37:05.159510 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:05.160535 systemd[1]: Finished audit-rules.service. Jul 15 11:37:05.161570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:37:05.161704 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:37:05.162991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:37:05.163142 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:37:05.165213 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:37:05.165444 systemd[1]: Finished modprobe@loop.service. Jul 15 11:37:05.167227 systemd[1]: Finished systemd-update-done.service. Jul 15 11:37:05.168349 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:37:05.168432 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.172323 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:05.172530 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.173595 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:37:05.175413 systemd[1]: Starting modprobe@drm.service... Jul 15 11:37:05.177216 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:37:05.178902 systemd[1]: Starting modprobe@loop.service... Jul 15 11:37:05.179661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.179798 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:37:05.180837 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 15 11:37:05.181824 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
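[editor's note] The audit PROCTITLE record above carries the command line hex-encoded with NUL separators. Decoding it (a small sketch; the hex string is copied verbatim from the log) recovers the auditctl invocation that loaded /etc/audit/audit.rules:

    # Decode the audit PROCTITLE field: hex-encoded argv joined by NUL bytes.
    proctitle = ("2F7362696E2F617564697463746C002D52002F6574632F61756469"
                 "742F61756469742E72756C6573")
    argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
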
Jul 15 11:37:05.181917 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:37:05.183011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:37:05.183140 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:37:05.186905 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:37:05.187027 systemd[1]: Finished modprobe@drm.service. Jul 15 11:37:05.188211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:37:05.188331 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:37:05.189434 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:37:05.189667 systemd[1]: Finished modprobe@loop.service. Jul 15 11:37:05.190859 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:37:05.190940 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.192863 systemd[1]: Finished ensure-sysext.service. Jul 15 11:37:05.192905 systemd-resolved[1229]: Positive Trust Anchors: Jul 15 11:37:05.192912 systemd-resolved[1229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:37:05.192937 systemd-resolved[1229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:37:05.200128 systemd-resolved[1229]: Defaulting to hostname 'linux'. Jul 15 11:37:05.201427 systemd[1]: Started systemd-resolved.service. Jul 15 11:37:05.202290 systemd[1]: Reached target network.target. Jul 15 11:37:05.203042 systemd[1]: Reached target nss-lookup.target. Jul 15 11:37:05.209363 systemd[1]: Started systemd-timesyncd.service. Jul 15 11:37:05.210380 systemd[1]: Reached target sysinit.target. Jul 15 11:37:05.628584 systemd-resolved[1229]: Clock change detected. Flushing caches. Jul 15 11:37:05.628608 systemd-timesyncd[1230]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 11:37:05.628642 systemd-timesyncd[1230]: Initial clock synchronization to Tue 2025-07-15 11:37:05.628547 UTC. Jul 15 11:37:05.628794 systemd[1]: Started motdgen.path. Jul 15 11:37:05.629490 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 15 11:37:05.630548 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 15 11:37:05.631377 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 11:37:05.631394 systemd[1]: Reached target paths.target. Jul 15 11:37:05.632093 systemd[1]: Reached target time-set.target. Jul 15 11:37:05.632970 systemd[1]: Started logrotate.timer. Jul 15 11:37:05.633789 systemd[1]: Started mdadm.timer. Jul 15 11:37:05.634499 systemd[1]: Reached target timers.target. Jul 15 11:37:05.635474 systemd[1]: Listening on dbus.socket. Jul 15 11:37:05.637142 systemd[1]: Starting docker.socket... Jul 15 11:37:05.638719 systemd[1]: Listening on sshd.socket. 
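[editor's note] systemd-resolved lists its positive trust anchor as a DS record for the root zone. Splitting that record into its DS fields (key tag, algorithm, digest type, digest) with a short sketch; the record text is the one quoted in the log:

    # Parse the root-zone DS record systemd-resolved reports as its
    # positive trust anchor, following the DS RR field layout.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, klass, rtype, key_tag, algorithm, digest_type, digest = ds.split()
    print("owner      :", owner)
    print("key tag    :", key_tag)       # 20326
    print("algorithm  :", algorithm)     # 8 = RSA/SHA-256
    print("digest type:", digest_type)   # 2 = SHA-256
    print("digest     :", digest)
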
Jul 15 11:37:05.639527 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:37:05.639789 systemd[1]: Listening on docker.socket. Jul 15 11:37:05.640636 systemd[1]: Reached target sockets.target. Jul 15 11:37:05.641407 systemd[1]: Reached target basic.target. Jul 15 11:37:05.642234 systemd[1]: System is tainted: cgroupsv1 Jul 15 11:37:05.642289 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.642306 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:37:05.643118 systemd[1]: Starting containerd.service... Jul 15 11:37:05.644710 systemd[1]: Starting dbus.service... Jul 15 11:37:05.646275 systemd[1]: Starting enable-oem-cloudinit.service... Jul 15 11:37:05.648053 systemd[1]: Starting extend-filesystems.service... Jul 15 11:37:05.651316 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 15 11:37:05.652261 systemd[1]: Starting motdgen.service... Jul 15 11:37:05.654314 systemd[1]: Starting prepare-helm.service... Jul 15 11:37:05.661953 jq[1286]: false Jul 15 11:37:05.656006 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 15 11:37:05.657788 systemd[1]: Starting sshd-keygen.service... Jul 15 11:37:05.660200 systemd[1]: Starting systemd-logind.service... Jul 15 11:37:05.662522 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:37:05.673483 extend-filesystems[1287]: Found loop1 Jul 15 11:37:05.673483 extend-filesystems[1287]: Found sr0 Jul 15 11:37:05.673483 extend-filesystems[1287]: Found vda Jul 15 11:37:05.673483 extend-filesystems[1287]: Found vda1 Jul 15 11:37:05.673483 extend-filesystems[1287]: Found vda2 Jul 15 11:37:05.673483 extend-filesystems[1287]: Found vda3 Jul 15 11:37:05.673483 extend-filesystems[1287]: Found usr Jul 15 11:37:05.673483 extend-filesystems[1287]: Found vda4 Jul 15 11:37:05.673483 extend-filesystems[1287]: Found vda6 Jul 15 11:37:05.673483 extend-filesystems[1287]: Found vda7 Jul 15 11:37:05.673483 extend-filesystems[1287]: Found vda9 Jul 15 11:37:05.673483 extend-filesystems[1287]: Checking size of /dev/vda9 Jul 15 11:37:05.663410 dbus-daemon[1284]: [system] SELinux support is enabled Jul 15 11:37:05.662581 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 11:37:05.699727 extend-filesystems[1287]: Resized partition /dev/vda9 Jul 15 11:37:05.663501 systemd[1]: Starting update-engine.service... Jul 15 11:37:05.701436 jq[1302]: true Jul 15 11:37:05.665186 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 15 11:37:05.666624 systemd[1]: Started dbus.service. Jul 15 11:37:05.703021 extend-filesystems[1329]: resize2fs 1.46.5 (30-Dec-2021) Jul 15 11:37:05.704219 tar[1307]: linux-amd64/helm Jul 15 11:37:05.669754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 11:37:05.704643 jq[1313]: true Jul 15 11:37:05.672100 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 15 11:37:05.672816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
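[editor's note] extend-filesystems checks and resizes /dev/vda9 here; the kernel and resize2fs lines just below report the ext4 filesystem growing from 553472 to 1864699 blocks of 4 KiB. The arithmetic, as a quick check:

    # Size check for the /dev/vda9 online resize reported below:
    # ext4 grows from 553472 to 1864699 blocks of 4 KiB each.
    block_size = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    to_gib = lambda blocks: blocks * block_size / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")   # ~2.11 GiB
    print(f"after : {to_gib(new_blocks):.2f} GiB")   # ~7.11 GiB
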
Jul 15 11:37:05.673874 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 15 11:37:05.676286 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 11:37:05.676313 systemd[1]: Reached target system-config.target. Jul 15 11:37:05.677629 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 11:37:05.677646 systemd[1]: Reached target user-config.target. Jul 15 11:37:05.683628 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 11:37:05.683817 systemd[1]: Finished motdgen.service. Jul 15 11:37:05.719260 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 11:37:05.719340 update_engine[1300]: I0715 11:37:05.717393 1300 main.cc:92] Flatcar Update Engine starting Jul 15 11:37:05.722060 systemd[1]: Started update-engine.service. Jul 15 11:37:05.722740 update_engine[1300]: I0715 11:37:05.722100 1300 update_check_scheduler.cc:74] Next update check in 10m5s Jul 15 11:37:05.724966 systemd[1]: Started locksmithd.service. Jul 15 11:37:05.725539 env[1314]: time="2025-07-15T11:37:05.725226675Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 15 11:37:05.747278 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 11:37:05.769158 env[1314]: time="2025-07-15T11:37:05.751503094Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 15 11:37:05.769587 systemd-logind[1296]: Watching system buttons on /dev/input/event1 (Power Button) Jul 15 11:37:05.769614 systemd-logind[1296]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 11:37:05.770383 extend-filesystems[1329]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 11:37:05.770383 extend-filesystems[1329]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 11:37:05.770383 extend-filesystems[1329]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 11:37:05.770027 systemd-logind[1296]: New seat seat0. Jul 15 11:37:05.777304 extend-filesystems[1287]: Resized filesystem in /dev/vda9 Jul 15 11:37:05.770667 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 11:37:05.770895 systemd[1]: Finished extend-filesystems.service. Jul 15 11:37:05.777083 systemd[1]: Started systemd-logind.service. Jul 15 11:37:05.779304 env[1314]: time="2025-07-15T11:37:05.779265749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:37:05.780425 env[1314]: time="2025-07-15T11:37:05.780399053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:37:05.780505 env[1314]: time="2025-07-15T11:37:05.780487710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:37:05.780837 env[1314]: time="2025-07-15T11:37:05.780818430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:37:05.780921 env[1314]: time="2025-07-15T11:37:05.780904631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 15 11:37:05.781019 env[1314]: time="2025-07-15T11:37:05.781001362Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 15 11:37:05.781108 env[1314]: time="2025-07-15T11:37:05.781090249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 15 11:37:05.781320 env[1314]: time="2025-07-15T11:37:05.781265267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:37:05.781608 env[1314]: time="2025-07-15T11:37:05.781592771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:37:05.781650 bash[1345]: Updated "/home/core/.ssh/authorized_keys" Jul 15 11:37:05.781979 env[1314]: time="2025-07-15T11:37:05.781961042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:37:05.782130 env[1314]: time="2025-07-15T11:37:05.782112836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 15 11:37:05.782284 env[1314]: time="2025-07-15T11:37:05.782268197Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 15 11:37:05.782335 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 15 11:37:05.782934 env[1314]: time="2025-07-15T11:37:05.782353537Z" level=info msg="metadata content store policy set" policy=shared Jul 15 11:37:05.789471 env[1314]: time="2025-07-15T11:37:05.789438459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 15 11:37:05.789529 env[1314]: time="2025-07-15T11:37:05.789476400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 15 11:37:05.789529 env[1314]: time="2025-07-15T11:37:05.789493532Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 15 11:37:05.789660 env[1314]: time="2025-07-15T11:37:05.789614629Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 15 11:37:05.789660 env[1314]: time="2025-07-15T11:37:05.789631441Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789645988Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789779348Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789792743Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789814674Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789826246Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789838128Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789849760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789926374Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.789988300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.790343165Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.790371809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.790383120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.790427924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792354 env[1314]: time="2025-07-15T11:37:05.790439085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.791712 systemd[1]: Started containerd.service. Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790454744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790464192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790475814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790486574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790496953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790507273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790524585Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790629041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790642125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790652665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790662674Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790674556Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790685437Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 15 11:37:05.792722 env[1314]: time="2025-07-15T11:37:05.790701657Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 15 11:37:05.792464 locksmithd[1346]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 11:37:05.793148 env[1314]: time="2025-07-15T11:37:05.790732495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 15 11:37:05.793181 env[1314]: time="2025-07-15T11:37:05.790897504Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 15 11:37:05.793181 env[1314]: time="2025-07-15T11:37:05.790942028Z" level=info msg="Connect containerd service" Jul 15 11:37:05.793181 env[1314]: 
time="2025-07-15T11:37:05.790970521Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 15 11:37:05.793181 env[1314]: time="2025-07-15T11:37:05.791393694Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:37:05.793181 env[1314]: time="2025-07-15T11:37:05.791584863Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 11:37:05.793181 env[1314]: time="2025-07-15T11:37:05.791613927Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 11:37:05.793181 env[1314]: time="2025-07-15T11:37:05.791647610Z" level=info msg="containerd successfully booted in 0.067790s" Jul 15 11:37:05.794768 env[1314]: time="2025-07-15T11:37:05.794739817Z" level=info msg="Start subscribing containerd event" Jul 15 11:37:05.795064 env[1314]: time="2025-07-15T11:37:05.795045120Z" level=info msg="Start recovering state" Jul 15 11:37:05.795306 env[1314]: time="2025-07-15T11:37:05.795291411Z" level=info msg="Start event monitor" Jul 15 11:37:05.795521 env[1314]: time="2025-07-15T11:37:05.795362705Z" level=info msg="Start snapshots syncer" Jul 15 11:37:05.795593 env[1314]: time="2025-07-15T11:37:05.795575714Z" level=info msg="Start cni network conf syncer for default" Jul 15 11:37:05.795671 env[1314]: time="2025-07-15T11:37:05.795653630Z" level=info msg="Start streaming server" Jul 15 11:37:06.071655 tar[1307]: linux-amd64/LICENSE Jul 15 11:37:06.071655 tar[1307]: linux-amd64/README.md Jul 15 11:37:06.075774 systemd[1]: Finished prepare-helm.service. Jul 15 11:37:06.172781 sshd_keygen[1308]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 11:37:06.190237 systemd[1]: Finished sshd-keygen.service. Jul 15 11:37:06.192468 systemd[1]: Starting issuegen.service... Jul 15 11:37:06.198188 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 11:37:06.198386 systemd[1]: Finished issuegen.service. Jul 15 11:37:06.200264 systemd[1]: Starting systemd-user-sessions.service... Jul 15 11:37:06.204953 systemd[1]: Finished systemd-user-sessions.service. Jul 15 11:37:06.206878 systemd[1]: Started getty@tty1.service. Jul 15 11:37:06.208572 systemd[1]: Started serial-getty@ttyS0.service. Jul 15 11:37:06.209556 systemd[1]: Reached target getty.target. Jul 15 11:37:06.791429 systemd-networkd[1089]: eth0: Gained IPv6LL Jul 15 11:37:06.793021 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 15 11:37:06.794324 systemd[1]: Reached target network-online.target. Jul 15 11:37:06.796676 systemd[1]: Starting kubelet.service... Jul 15 11:37:07.455964 systemd[1]: Started kubelet.service. Jul 15 11:37:07.457422 systemd[1]: Reached target multi-user.target. Jul 15 11:37:07.459907 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 15 11:37:07.466947 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 15 11:37:07.467219 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 15 11:37:07.469698 systemd[1]: Startup finished in 5.321s (kernel) + 5.661s (userspace) = 10.982s. 
Jul 15 11:37:07.853342 kubelet[1388]: E0715 11:37:07.853275 1388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:37:07.854998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:37:07.855151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:37:09.567618 systemd[1]: Created slice system-sshd.slice. Jul 15 11:37:09.568612 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:38234.service. Jul 15 11:37:09.605372 sshd[1398]: Accepted publickey for core from 10.0.0.1 port 38234 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:37:09.606685 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:37:09.614332 systemd-logind[1296]: New session 1 of user core. Jul 15 11:37:09.615026 systemd[1]: Created slice user-500.slice. Jul 15 11:37:09.615823 systemd[1]: Starting user-runtime-dir@500.service... Jul 15 11:37:09.624634 systemd[1]: Finished user-runtime-dir@500.service. Jul 15 11:37:09.625619 systemd[1]: Starting user@500.service... Jul 15 11:37:09.628013 (systemd)[1403]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:37:09.699379 systemd[1403]: Queued start job for default target default.target. Jul 15 11:37:09.699574 systemd[1403]: Reached target paths.target. Jul 15 11:37:09.699589 systemd[1403]: Reached target sockets.target. Jul 15 11:37:09.699600 systemd[1403]: Reached target timers.target. Jul 15 11:37:09.699611 systemd[1403]: Reached target basic.target. Jul 15 11:37:09.699645 systemd[1403]: Reached target default.target. Jul 15 11:37:09.699672 systemd[1403]: Startup finished in 66ms. Jul 15 11:37:09.699792 systemd[1]: Started user@500.service. Jul 15 11:37:09.700728 systemd[1]: Started session-1.scope. Jul 15 11:37:09.749096 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:38236.service. Jul 15 11:37:09.783434 sshd[1412]: Accepted publickey for core from 10.0.0.1 port 38236 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:37:09.784405 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:37:09.787635 systemd-logind[1296]: New session 2 of user core. Jul 15 11:37:09.788350 systemd[1]: Started session-2.scope. Jul 15 11:37:09.839680 sshd[1412]: pam_unix(sshd:session): session closed for user core Jul 15 11:37:09.841980 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:38250.service. Jul 15 11:37:09.842440 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:38236.service: Deactivated successfully. Jul 15 11:37:09.843181 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 11:37:09.843205 systemd-logind[1296]: Session 2 logged out. Waiting for processes to exit. Jul 15 11:37:09.843998 systemd-logind[1296]: Removed session 2. Jul 15 11:37:09.874844 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 38250 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:37:09.875643 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:37:09.878601 systemd-logind[1296]: New session 3 of user core. Jul 15 11:37:09.879232 systemd[1]: Started session-3.scope. 
Jul 15 11:37:09.927121 sshd[1418]: pam_unix(sshd:session): session closed for user core Jul 15 11:37:09.929265 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:38252.service. Jul 15 11:37:09.929634 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:38250.service: Deactivated successfully. Jul 15 11:37:09.930736 systemd-logind[1296]: Session 3 logged out. Waiting for processes to exit. Jul 15 11:37:09.930805 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 11:37:09.931677 systemd-logind[1296]: Removed session 3. Jul 15 11:37:09.962442 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 38252 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:37:09.963381 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:37:09.966411 systemd-logind[1296]: New session 4 of user core. Jul 15 11:37:09.967066 systemd[1]: Started session-4.scope. Jul 15 11:37:10.018796 sshd[1424]: pam_unix(sshd:session): session closed for user core Jul 15 11:37:10.021176 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:38268.service. Jul 15 11:37:10.021563 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:38252.service: Deactivated successfully. Jul 15 11:37:10.022409 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 11:37:10.022499 systemd-logind[1296]: Session 4 logged out. Waiting for processes to exit. Jul 15 11:37:10.023187 systemd-logind[1296]: Removed session 4. Jul 15 11:37:10.054436 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 38268 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:37:10.055375 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:37:10.058360 systemd-logind[1296]: New session 5 of user core. Jul 15 11:37:10.059006 systemd[1]: Started session-5.scope. Jul 15 11:37:10.111991 sudo[1437]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 11:37:10.112184 sudo[1437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:37:10.121440 dbus-daemon[1284]: \xd0\xfd\xe6\xea8V: received setenforce notice (enforcing=255966640) Jul 15 11:37:10.123503 sudo[1437]: pam_unix(sudo:session): session closed for user root Jul 15 11:37:10.125167 sshd[1432]: pam_unix(sshd:session): session closed for user core Jul 15 11:37:10.127803 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:38282.service. Jul 15 11:37:10.128301 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:38268.service: Deactivated successfully. Jul 15 11:37:10.129194 systemd-logind[1296]: Session 5 logged out. Waiting for processes to exit. Jul 15 11:37:10.129197 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 11:37:10.130209 systemd-logind[1296]: Removed session 5. Jul 15 11:37:10.163346 sshd[1439]: Accepted publickey for core from 10.0.0.1 port 38282 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:37:10.164333 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:37:10.167280 systemd-logind[1296]: New session 6 of user core. Jul 15 11:37:10.167957 systemd[1]: Started session-6.scope. 
Jul 15 11:37:10.220597 sudo[1446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 11:37:10.220784 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:37:10.223156 sudo[1446]: pam_unix(sudo:session): session closed for user root Jul 15 11:37:10.227528 sudo[1445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 15 11:37:10.227719 sudo[1445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:37:10.235902 systemd[1]: Stopping audit-rules.service... Jul 15 11:37:10.235000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 15 11:37:10.237292 auditctl[1449]: No rules Jul 15 11:37:10.237529 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 11:37:10.237709 systemd[1]: Stopped audit-rules.service. Jul 15 11:37:10.242269 kernel: kauditd_printk_skb: 215 callbacks suppressed Jul 15 11:37:10.242311 kernel: audit: type=1305 audit(1752579430.235:143): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 15 11:37:10.242335 kernel: audit: type=1300 audit(1752579430.235:143): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbe4bc760 a2=420 a3=0 items=0 ppid=1 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:10.235000 audit[1449]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbe4bc760 a2=420 a3=0 items=0 ppid=1 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:10.238980 systemd[1]: Starting audit-rules.service... Jul 15 11:37:10.235000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 15 11:37:10.246126 kernel: audit: type=1327 audit(1752579430.235:143): proctitle=2F7362696E2F617564697463746C002D44 Jul 15 11:37:10.246149 kernel: audit: type=1131 audit(1752579430.235:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.254854 augenrules[1467]: No rules Jul 15 11:37:10.255431 systemd[1]: Finished audit-rules.service. Jul 15 11:37:10.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.255000 audit[1445]: USER_END pid=1445 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.256375 sudo[1445]: pam_unix(sudo:session): session closed for user root Jul 15 11:37:10.259367 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:38296.service. 
Jul 15 11:37:10.258264 sshd[1439]: pam_unix(sshd:session): session closed for user core Jul 15 11:37:10.262535 kernel: audit: type=1130 audit(1752579430.254:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.262575 kernel: audit: type=1106 audit(1752579430.255:146): pid=1445 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.262599 kernel: audit: type=1104 audit(1752579430.255:147): pid=1445 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.255000 audit[1445]: CRED_DISP pid=1445 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.262634 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:38282.service: Deactivated successfully. Jul 15 11:37:10.263504 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 11:37:10.264024 systemd-logind[1296]: Session 6 logged out. Waiting for processes to exit. Jul 15 11:37:10.264777 systemd-logind[1296]: Removed session 6. Jul 15 11:37:10.265654 kernel: audit: type=1130 audit(1752579430.258:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.133:22-10.0.0.1:38296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.133:22-10.0.0.1:38296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.260000 audit[1439]: USER_END pid=1439 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:10.273323 kernel: audit: type=1106 audit(1752579430.260:149): pid=1439 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:10.273365 kernel: audit: type=1104 audit(1752579430.260:150): pid=1439 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:10.260000 audit[1439]: CRED_DISP pid=1439 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:10.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.133:22-10.0.0.1:38282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:37:10.293000 audit[1472]: USER_ACCT pid=1472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:10.294996 sshd[1472]: Accepted publickey for core from 10.0.0.1 port 38296 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:37:10.294000 audit[1472]: CRED_ACQ pid=1472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:10.294000 audit[1472]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3419cb90 a2=3 a3=0 items=0 ppid=1 pid=1472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:10.294000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:37:10.295885 sshd[1472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:37:10.299015 systemd-logind[1296]: New session 7 of user core. Jul 15 11:37:10.299698 systemd[1]: Started session-7.scope. Jul 15 11:37:10.301000 audit[1472]: USER_START pid=1472 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:10.302000 audit[1477]: CRED_ACQ pid=1477 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:10.349000 audit[1478]: USER_ACCT pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.350658 sudo[1478]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 11:37:10.349000 audit[1478]: CRED_REFR pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.350851 sudo[1478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:37:10.350000 audit[1478]: USER_START pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:10.370433 systemd[1]: Starting docker.service... 
Jul 15 11:37:10.402402 env[1490]: time="2025-07-15T11:37:10.402352044Z" level=info msg="Starting up" Jul 15 11:37:10.403725 env[1490]: time="2025-07-15T11:37:10.403696906Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 15 11:37:10.403725 env[1490]: time="2025-07-15T11:37:10.403713757Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 15 11:37:10.403788 env[1490]: time="2025-07-15T11:37:10.403735047Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 15 11:37:10.403788 env[1490]: time="2025-07-15T11:37:10.403744815Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 15 11:37:10.405013 env[1490]: time="2025-07-15T11:37:10.404994057Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 15 11:37:10.405013 env[1490]: time="2025-07-15T11:37:10.405008254Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 15 11:37:10.405108 env[1490]: time="2025-07-15T11:37:10.405018814Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 15 11:37:10.405108 env[1490]: time="2025-07-15T11:37:10.405025777Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 15 11:37:10.410094 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport185503040-merged.mount: Deactivated successfully. Jul 15 11:37:11.155466 env[1490]: time="2025-07-15T11:37:11.155408136Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 15 11:37:11.155466 env[1490]: time="2025-07-15T11:37:11.155433012Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 15 11:37:11.155716 env[1490]: time="2025-07-15T11:37:11.155593603Z" level=info msg="Loading containers: start." 
Jul 15 11:37:11.203000 audit[1524]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.203000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd6f2261c0 a2=0 a3=7ffd6f2261ac items=0 ppid=1490 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.203000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 15 11:37:11.204000 audit[1526]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.204000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd5aa72100 a2=0 a3=7ffd5aa720ec items=0 ppid=1490 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.204000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 15 11:37:11.206000 audit[1528]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.206000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff2897c6f0 a2=0 a3=7fff2897c6dc items=0 ppid=1490 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.206000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 15 11:37:11.207000 audit[1530]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.207000 audit[1530]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffc5d8d910 a2=0 a3=7fffc5d8d8fc items=0 ppid=1490 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.207000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 15 11:37:11.209000 audit[1532]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.209000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe09a97040 a2=0 a3=7ffe09a9702c items=0 ppid=1490 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.209000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 15 11:37:11.222000 audit[1537]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1537 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jul 15 11:37:11.222000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc2d93da20 a2=0 a3=7ffc2d93da0c items=0 ppid=1490 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.222000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 15 11:37:11.231000 audit[1539]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.231000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1a159130 a2=0 a3=7ffd1a15911c items=0 ppid=1490 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.231000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 15 11:37:11.233000 audit[1541]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.233000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd00e45f40 a2=0 a3=7ffd00e45f2c items=0 ppid=1490 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.233000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 15 11:37:11.234000 audit[1543]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.234000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffdfb208060 a2=0 a3=7ffdfb20804c items=0 ppid=1490 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.234000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:37:11.241000 audit[1547]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.241000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd4f724100 a2=0 a3=7ffd4f7240ec items=0 ppid=1490 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.241000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:37:11.246000 audit[1548]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.246000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc38574960 a2=0 a3=7ffc3857494c items=0 ppid=1490 
pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.246000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:37:11.256273 kernel: Initializing XFRM netlink socket Jul 15 11:37:11.281701 env[1490]: time="2025-07-15T11:37:11.281660353Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 15 11:37:11.297000 audit[1556]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.297000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe47f57b30 a2=0 a3=7ffe47f57b1c items=0 ppid=1490 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.297000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 15 11:37:11.307000 audit[1559]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.307000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffa58fdb50 a2=0 a3=7fffa58fdb3c items=0 ppid=1490 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.307000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 15 11:37:11.310000 audit[1562]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.310000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffed34d73f0 a2=0 a3=7ffed34d73dc items=0 ppid=1490 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.310000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 15 11:37:11.311000 audit[1564]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.311000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd4d852320 a2=0 a3=7ffd4d85230c items=0 ppid=1490 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.311000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 15 11:37:11.313000 audit[1566]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.313000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff1154bba0 a2=0 a3=7fff1154bb8c items=0 ppid=1490 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.313000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 15 11:37:11.314000 audit[1568]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.314000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffcc5c4d690 a2=0 a3=7ffcc5c4d67c items=0 ppid=1490 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.314000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 15 11:37:11.316000 audit[1570]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.316000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd4dc02d80 a2=0 a3=7ffd4dc02d6c items=0 ppid=1490 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.316000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 15 11:37:11.322000 audit[1573]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.322000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc08de66c0 a2=0 a3=7ffc08de66ac items=0 ppid=1490 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.322000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 15 11:37:11.324000 audit[1575]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.324000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffeab33aa70 a2=0 a3=7ffeab33aa5c items=0 ppid=1490 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.324000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 15 11:37:11.326000 audit[1577]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.326000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff01a67430 a2=0 a3=7fff01a6741c items=0 ppid=1490 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.326000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 15 11:37:11.327000 audit[1579]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.327000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd5edc6330 a2=0 a3=7ffd5edc631c items=0 ppid=1490 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.327000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 15 11:37:11.329309 systemd-networkd[1089]: docker0: Link UP Jul 15 11:37:11.337000 audit[1583]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.337000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff70fafb10 a2=0 a3=7fff70fafafc items=0 ppid=1490 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.337000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:37:11.342000 audit[1584]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:11.342000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeb93f5950 a2=0 a3=7ffeb93f593c items=0 ppid=1490 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:11.342000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 15 11:37:11.344511 env[1490]: time="2025-07-15T11:37:11.344474417Z" level=info msg="Loading containers: done." 
Jul 15 11:37:11.358042 env[1490]: time="2025-07-15T11:37:11.357984033Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 11:37:11.358168 env[1490]: time="2025-07-15T11:37:11.358158190Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 15 11:37:11.358265 env[1490]: time="2025-07-15T11:37:11.358233902Z" level=info msg="Daemon has completed initialization" Jul 15 11:37:11.374416 systemd[1]: Started docker.service. Jul 15 11:37:11.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:11.378119 env[1490]: time="2025-07-15T11:37:11.378073233Z" level=info msg="API listen on /run/docker.sock" Jul 15 11:37:12.067959 env[1314]: time="2025-07-15T11:37:12.067915251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 15 11:37:12.701137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509187637.mount: Deactivated successfully. Jul 15 11:37:14.037390 env[1314]: time="2025-07-15T11:37:14.037333947Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:14.039079 env[1314]: time="2025-07-15T11:37:14.039040476Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:14.040843 env[1314]: time="2025-07-15T11:37:14.040813210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:14.042470 env[1314]: time="2025-07-15T11:37:14.042433597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:14.043114 env[1314]: time="2025-07-15T11:37:14.043080700Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 15 11:37:14.043667 env[1314]: time="2025-07-15T11:37:14.043635641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 15 11:37:15.562439 env[1314]: time="2025-07-15T11:37:15.562368052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:15.564352 env[1314]: time="2025-07-15T11:37:15.564313088Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:15.566280 env[1314]: time="2025-07-15T11:37:15.566211867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:15.568073 env[1314]: 
time="2025-07-15T11:37:15.568029996Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:15.568798 env[1314]: time="2025-07-15T11:37:15.568764923Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 15 11:37:15.569195 env[1314]: time="2025-07-15T11:37:15.569172958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 15 11:37:17.027107 env[1314]: time="2025-07-15T11:37:17.027064855Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:17.029116 env[1314]: time="2025-07-15T11:37:17.029070365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:17.030917 env[1314]: time="2025-07-15T11:37:17.030885137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:17.032722 env[1314]: time="2025-07-15T11:37:17.032680081Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:17.033261 env[1314]: time="2025-07-15T11:37:17.033217289Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 15 11:37:17.033683 env[1314]: time="2025-07-15T11:37:17.033661291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 15 11:37:18.049591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 11:37:18.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:18.049857 systemd[1]: Stopped kubelet.service. Jul 15 11:37:18.050681 kernel: kauditd_printk_skb: 84 callbacks suppressed Jul 15 11:37:18.050735 kernel: audit: type=1130 audit(1752579438.048:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:18.051706 systemd[1]: Starting kubelet.service... Jul 15 11:37:18.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:18.056399 kernel: audit: type=1131 audit(1752579438.048:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:37:18.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:18.209726 systemd[1]: Started kubelet.service. Jul 15 11:37:18.214309 kernel: audit: type=1130 audit(1752579438.208:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:18.546740 kubelet[1630]: E0715 11:37:18.546593 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:37:18.550365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:37:18.550499 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:37:18.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 15 11:37:18.554267 kernel: audit: type=1131 audit(1752579438.549:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 15 11:37:19.382894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251999380.mount: Deactivated successfully. Jul 15 11:37:20.410315 env[1314]: time="2025-07-15T11:37:20.410222127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:20.413254 env[1314]: time="2025-07-15T11:37:20.413179701Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:20.415140 env[1314]: time="2025-07-15T11:37:20.415086606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:20.416849 env[1314]: time="2025-07-15T11:37:20.416804576Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:20.417261 env[1314]: time="2025-07-15T11:37:20.417209555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 15 11:37:20.417853 env[1314]: time="2025-07-15T11:37:20.417795404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 11:37:20.948314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082374716.mount: Deactivated successfully. 
Jul 15 11:37:21.844262 env[1314]: time="2025-07-15T11:37:21.844200744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:21.846120 env[1314]: time="2025-07-15T11:37:21.846067193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:21.847786 env[1314]: time="2025-07-15T11:37:21.847754807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:21.849395 env[1314]: time="2025-07-15T11:37:21.849360507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:21.850082 env[1314]: time="2025-07-15T11:37:21.850046463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 11:37:21.850526 env[1314]: time="2025-07-15T11:37:21.850503109Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 11:37:22.320558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736380732.mount: Deactivated successfully. Jul 15 11:37:22.325129 env[1314]: time="2025-07-15T11:37:22.325096131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:22.326828 env[1314]: time="2025-07-15T11:37:22.326779898Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:22.328263 env[1314]: time="2025-07-15T11:37:22.328208937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:22.329591 env[1314]: time="2025-07-15T11:37:22.329565500Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:22.329965 env[1314]: time="2025-07-15T11:37:22.329925374Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 11:37:22.330393 env[1314]: time="2025-07-15T11:37:22.330368555Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 11:37:22.847840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838708802.mount: Deactivated successfully. 
Jul 15 11:37:25.751069 env[1314]: time="2025-07-15T11:37:25.751016559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:25.752987 env[1314]: time="2025-07-15T11:37:25.752952137Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:25.755952 env[1314]: time="2025-07-15T11:37:25.755921574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:25.757858 env[1314]: time="2025-07-15T11:37:25.757827988Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:25.758718 env[1314]: time="2025-07-15T11:37:25.758669044Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 15 11:37:27.843497 systemd[1]: Stopped kubelet.service. Jul 15 11:37:27.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:27.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:27.846839 systemd[1]: Starting kubelet.service... Jul 15 11:37:27.849795 kernel: audit: type=1130 audit(1752579447.842:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:27.849890 kernel: audit: type=1131 audit(1752579447.843:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:27.870391 systemd[1]: Reloading. Jul 15 11:37:27.939437 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2025-07-15T11:37:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:37:27.939462 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2025-07-15T11:37:27Z" level=info msg="torcx already run" Jul 15 11:37:28.384349 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:37:28.384365 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:37:28.400619 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 15 11:37:28.470844 systemd[1]: Started kubelet.service. Jul 15 11:37:28.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:28.473184 systemd[1]: Stopping kubelet.service... Jul 15 11:37:28.473454 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:37:28.473669 systemd[1]: Stopped kubelet.service. Jul 15 11:37:28.474956 systemd[1]: Starting kubelet.service... Jul 15 11:37:28.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:28.479231 kernel: audit: type=1130 audit(1752579448.470:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:28.479297 kernel: audit: type=1131 audit(1752579448.472:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:28.556855 systemd[1]: Started kubelet.service. Jul 15 11:37:28.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:28.561295 kernel: audit: type=1130 audit(1752579448.555:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:28.591207 kubelet[1750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:37:28.591207 kubelet[1750]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 11:37:28.591207 kubelet[1750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 11:37:28.591513 kubelet[1750]: I0715 11:37:28.591301 1750 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:37:29.030621 kubelet[1750]: I0715 11:37:29.030571 1750 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 11:37:29.030621 kubelet[1750]: I0715 11:37:29.030605 1750 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:37:29.030874 kubelet[1750]: I0715 11:37:29.030860 1750 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 11:37:29.046797 kubelet[1750]: E0715 11:37:29.046763 1750 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:29.047290 kubelet[1750]: I0715 11:37:29.047275 1750 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:37:29.052052 kubelet[1750]: E0715 11:37:29.052009 1750 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:37:29.052052 kubelet[1750]: I0715 11:37:29.052044 1750 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:37:29.056749 kubelet[1750]: I0715 11:37:29.056730 1750 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 11:37:29.057160 kubelet[1750]: I0715 11:37:29.057148 1750 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 11:37:29.057380 kubelet[1750]: I0715 11:37:29.057342 1750 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:37:29.057735 kubelet[1750]: I0715 11:37:29.057445 1750 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 15 11:37:29.057926 kubelet[1750]: I0715 11:37:29.057901 1750 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:37:29.057926 kubelet[1750]: I0715 11:37:29.057922 1750 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 11:37:29.058040 kubelet[1750]: I0715 11:37:29.058021 1750 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:37:29.066102 kubelet[1750]: I0715 11:37:29.066074 1750 kubelet.go:408] "Attempting to sync node with API server" Jul 15 11:37:29.066102 kubelet[1750]: I0715 11:37:29.066104 1750 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:37:29.066176 kubelet[1750]: I0715 11:37:29.066138 1750 kubelet.go:314] "Adding apiserver pod source" Jul 15 11:37:29.066176 kubelet[1750]: I0715 11:37:29.066155 1750 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:37:29.077521 kubelet[1750]: W0715 11:37:29.077457 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 15 11:37:29.077583 kubelet[1750]: E0715 11:37:29.077537 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:29.081771 kubelet[1750]: I0715 11:37:29.081751 1750 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:37:29.082057 kubelet[1750]: W0715 11:37:29.082005 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 15 11:37:29.082106 kubelet[1750]: E0715 11:37:29.082063 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:29.082135 kubelet[1750]: I0715 11:37:29.082111 1750 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:37:29.082175 kubelet[1750]: W0715 11:37:29.082156 1750 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 11:37:29.085449 kubelet[1750]: I0715 11:37:29.085423 1750 server.go:1274] "Started kubelet" Jul 15 11:37:29.085500 kubelet[1750]: I0715 11:37:29.085481 1750 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:37:29.086343 kubelet[1750]: I0715 11:37:29.086317 1750 server.go:449] "Adding debug handlers to kubelet server" Jul 15 11:37:29.086481 kubelet[1750]: I0715 11:37:29.086467 1750 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 15 11:37:29.086543 kubelet[1750]: I0715 11:37:29.086497 1750 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 15 11:37:29.086575 kubelet[1750]: I0715 11:37:29.086555 1750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:37:29.085000 audit[1750]: AVC avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:29.095266 kernel: audit: type=1400 audit(1752579449.085:194): avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:29.095330 kernel: audit: type=1401 audit(1752579449.085:194): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:29.095357 kernel: audit: type=1300 audit(1752579449.085:194): arch=c000003e syscall=188 success=no exit=-22 a0=c0008a3020 a1=c000ae2858 a2=c0008a2ff0 a3=25 items=0 ppid=1 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.085000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:29.085000 audit[1750]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008a3020 a1=c000ae2858 a2=c0008a2ff0 a3=25 items=0 ppid=1 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.095488 kubelet[1750]: I0715 11:37:29.091378 1750 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:37:29.095488 kubelet[1750]: I0715 11:37:29.091863 1750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:37:29.095488 kubelet[1750]: I0715 11:37:29.092051 1750 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:37:29.095488 kubelet[1750]: I0715 11:37:29.092221 1750 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 11:37:29.095488 kubelet[1750]: I0715 11:37:29.092325 1750 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 11:37:29.095488 kubelet[1750]: I0715 11:37:29.092370 1750 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:37:29.095488 kubelet[1750]: W0715 11:37:29.092603 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 15 11:37:29.095488 kubelet[1750]: E0715 11:37:29.092639 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:29.095488 kubelet[1750]: E0715 11:37:29.092695 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:37:29.095488 kubelet[1750]: E0715 11:37:29.092846 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms" Jul 15 11:37:29.095488 kubelet[1750]: I0715 11:37:29.093026 1750 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:37:29.101211 kernel: audit: type=1327 audit(1752579449.085:194): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:29.085000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:29.101372 kubelet[1750]: I0715 11:37:29.097266 1750 factory.go:221] Registration of the containerd 
container factory successfully Jul 15 11:37:29.101372 kubelet[1750]: I0715 11:37:29.097275 1750 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:37:29.085000 audit[1750]: AVC avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:29.104658 kubelet[1750]: E0715 11:37:29.102222 1750 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:37:29.085000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:29.085000 audit[1750]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000163860 a1=c000ae2870 a2=c0008a30b0 a3=25 items=0 ppid=1 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.085000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:29.087000 audit[1764]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:29.087000 audit[1764]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc5d212610 a2=0 a3=7ffc5d2125fc items=0 ppid=1750 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 15 11:37:29.088000 audit[1765]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:29.088000 audit[1765]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcca40e100 a2=0 a3=7ffcca40e0ec items=0 ppid=1750 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 15 11:37:29.092000 audit[1767]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:29.092000 audit[1767]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc0836de70 a2=0 a3=7ffc0836de5c items=0 ppid=1750 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:37:29.094000 audit[1769]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1769 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:29.094000 audit[1769]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffec2a08850 a2=0 a3=7ffec2a0883c items=0 ppid=1750 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.094000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:37:29.105258 kernel: audit: type=1400 audit(1752579449.085:195): avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:29.105000 audit[1773]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:29.105000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffccbff4930 a2=0 a3=7ffccbff491c items=0 ppid=1750 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 15 11:37:29.106647 kubelet[1750]: I0715 11:37:29.106558 1750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 11:37:29.111488 kubelet[1750]: E0715 11:37:29.105309 1750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185269b92f6282e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:37:29.085395689 +0000 UTC m=+0.524550186,LastTimestamp:2025-07-15 11:37:29.085395689 +0000 UTC m=+0.524550186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:37:29.110000 audit[1775]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:29.110000 audit[1775]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0bd544d0 a2=0 a3=7ffd0bd544bc items=0 ppid=1750 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 15 11:37:29.111000 audit[1776]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:29.111000 audit[1776]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb9d19e20 a2=0 a3=7fffb9d19e0c items=0 ppid=1750 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 15 11:37:29.112000 audit[1777]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:29.112000 audit[1777]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb3837cc0 a2=0 a3=7fffb3837cac items=0 ppid=1750 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 15 11:37:29.114000 audit[1774]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:29.114000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd7a5b6800 a2=0 a3=7ffd7a5b67ec items=0 ppid=1750 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.114000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 15 11:37:29.115839 kubelet[1750]: I0715 11:37:29.115643 1750 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 11:37:29.115887 kubelet[1750]: I0715 11:37:29.115846 1750 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 11:37:29.115887 kubelet[1750]: I0715 11:37:29.115867 1750 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 11:37:29.116009 kubelet[1750]: E0715 11:37:29.115983 1750 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:37:29.115000 audit[1783]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:29.115000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd13471c0 a2=0 a3=7ffcd13471ac items=0 ppid=1750 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.115000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 15 11:37:29.116870 kubelet[1750]: W0715 11:37:29.116802 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 15 11:37:29.116870 kubelet[1750]: E0715 11:37:29.116852 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:29.117380 kubelet[1750]: I0715 11:37:29.117362 1750 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 11:37:29.117380 kubelet[1750]: I0715 11:37:29.117377 1750 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 11:37:29.117450 kubelet[1750]: I0715 11:37:29.117392 1750 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:37:29.116000 audit[1784]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:29.116000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffdce206b60 a2=0 a3=7ffdce206b4c items=0 ppid=1750 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 15 11:37:29.117000 audit[1785]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:29.117000 audit[1785]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd1ea11680 a2=0 a3=7ffd1ea1166c items=0 ppid=1750 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.117000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 15 11:37:29.192939 kubelet[1750]: E0715 11:37:29.192902 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:37:29.216287 kubelet[1750]: E0715 11:37:29.216258 1750 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 11:37:29.293687 kubelet[1750]: E0715 11:37:29.293601 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:37:29.293944 kubelet[1750]: E0715 11:37:29.293796 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" Jul 15 11:37:29.394298 kubelet[1750]: E0715 11:37:29.394272 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:37:29.416443 kubelet[1750]: E0715 11:37:29.416382 1750 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 11:37:29.494943 kubelet[1750]: E0715 11:37:29.494902 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:37:29.595863 kubelet[1750]: E0715 11:37:29.595837 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:37:29.694560 kubelet[1750]: E0715 11:37:29.694509 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" Jul 15 11:37:29.696648 kubelet[1750]: E0715 11:37:29.696610 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:37:29.797063 kubelet[1750]: E0715 11:37:29.797033 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:37:29.817295 kubelet[1750]: E0715 11:37:29.817236 1750 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 11:37:29.884695 kubelet[1750]: I0715 11:37:29.884616 1750 policy_none.go:49] "None policy: Start" Jul 15 11:37:29.885499 kubelet[1750]: I0715 11:37:29.885470 1750 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 11:37:29.885553 kubelet[1750]: I0715 11:37:29.885504 1750 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:37:29.891526 kubelet[1750]: I0715 11:37:29.891476 1750 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:37:29.890000 audit[1750]: AVC avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:29.890000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:29.890000 audit[1750]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d96e40 a1=c0008af908 a2=c000d96e10 a3=25 items=0 ppid=1 pid=1750 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:29.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:29.891832 kubelet[1750]: I0715 11:37:29.891552 1750 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 15 11:37:29.891832 kubelet[1750]: I0715 11:37:29.891656 1750 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:37:29.891832 kubelet[1750]: I0715 11:37:29.891666 1750 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:37:29.892344 kubelet[1750]: I0715 11:37:29.892329 1750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:37:29.893340 kubelet[1750]: E0715 11:37:29.893324 1750 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 11:37:29.951201 kubelet[1750]: W0715 11:37:29.951129 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 15 11:37:29.951201 kubelet[1750]: E0715 11:37:29.951188 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:29.993469 kubelet[1750]: I0715 11:37:29.993424 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:37:29.993746 kubelet[1750]: E0715 11:37:29.993727 1750 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 15 11:37:30.195701 kubelet[1750]: I0715 11:37:30.195597 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:37:30.195966 kubelet[1750]: E0715 11:37:30.195937 1750 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 15 11:37:30.272788 kubelet[1750]: W0715 11:37:30.272728 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 15 11:37:30.272788 kubelet[1750]: E0715 11:37:30.272787 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: 
connection refused" logger="UnhandledError" Jul 15 11:37:30.385430 kubelet[1750]: W0715 11:37:30.385340 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 15 11:37:30.385430 kubelet[1750]: E0715 11:37:30.385432 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:30.393217 kubelet[1750]: W0715 11:37:30.393179 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 15 11:37:30.393300 kubelet[1750]: E0715 11:37:30.393215 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:30.495453 kubelet[1750]: E0715 11:37:30.495323 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s" Jul 15 11:37:30.597295 kubelet[1750]: I0715 11:37:30.597264 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:37:30.597666 kubelet[1750]: E0715 11:37:30.597567 1750 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 15 11:37:30.664548 kubelet[1750]: E0715 11:37:30.664408 1750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185269b92f6282e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:37:29.085395689 +0000 UTC m=+0.524550186,LastTimestamp:2025-07-15 11:37:29.085395689 +0000 UTC m=+0.524550186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:37:30.701852 kubelet[1750]: I0715 11:37:30.701813 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a5b405d611f7b304f601fcb483e4fba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a5b405d611f7b304f601fcb483e4fba\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:37:30.701852 kubelet[1750]: I0715 11:37:30.701850 1750 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:30.701978 kubelet[1750]: I0715 11:37:30.701876 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:30.701978 kubelet[1750]: I0715 11:37:30.701895 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:30.701978 kubelet[1750]: I0715 11:37:30.701923 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:30.701978 kubelet[1750]: I0715 11:37:30.701962 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:37:30.701978 kubelet[1750]: I0715 11:37:30.701977 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a5b405d611f7b304f601fcb483e4fba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a5b405d611f7b304f601fcb483e4fba\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:37:30.702118 kubelet[1750]: I0715 11:37:30.701991 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a5b405d611f7b304f601fcb483e4fba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a5b405d611f7b304f601fcb483e4fba\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:37:30.702118 kubelet[1750]: I0715 11:37:30.702011 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:30.922953 kubelet[1750]: E0715 11:37:30.922897 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:30.923083 kubelet[1750]: E0715 11:37:30.922897 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:30.923635 env[1314]: time="2025-07-15T11:37:30.923528571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a5b405d611f7b304f601fcb483e4fba,Namespace:kube-system,Attempt:0,}" Jul 15 11:37:30.923873 env[1314]: time="2025-07-15T11:37:30.923529473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 11:37:30.923962 kubelet[1750]: E0715 11:37:30.923932 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:30.924300 env[1314]: time="2025-07-15T11:37:30.924277986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 11:37:31.115335 kubelet[1750]: E0715 11:37:31.115299 1750 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:37:31.399015 kubelet[1750]: I0715 11:37:31.398992 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:37:31.399253 kubelet[1750]: E0715 11:37:31.399222 1750 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 15 11:37:31.547939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250081888.mount: Deactivated successfully. 
Jul 15 11:37:31.559566 env[1314]: time="2025-07-15T11:37:31.559523675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.560387 env[1314]: time="2025-07-15T11:37:31.560353030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.562736 env[1314]: time="2025-07-15T11:37:31.562693207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.565309 env[1314]: time="2025-07-15T11:37:31.565276399Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.566395 env[1314]: time="2025-07-15T11:37:31.566373396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.567671 env[1314]: time="2025-07-15T11:37:31.567643988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.568774 env[1314]: time="2025-07-15T11:37:31.568754169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.569904 env[1314]: time="2025-07-15T11:37:31.569883025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.571076 env[1314]: time="2025-07-15T11:37:31.571040304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.572281 env[1314]: time="2025-07-15T11:37:31.572237168Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.574554 env[1314]: time="2025-07-15T11:37:31.574251885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.575693 env[1314]: time="2025-07-15T11:37:31.575659684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:31.596638 env[1314]: time="2025-07-15T11:37:31.596588809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:37:31.596638 env[1314]: time="2025-07-15T11:37:31.596622372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:37:31.596638 env[1314]: time="2025-07-15T11:37:31.596632541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:37:31.596774 env[1314]: time="2025-07-15T11:37:31.596744130Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb005a05f01f8e6c8515e86c26e42213a4a226751ffb79e1ead24b4c628ec6e0 pid=1793 runtime=io.containerd.runc.v2 Jul 15 11:37:31.613775 env[1314]: time="2025-07-15T11:37:31.613715625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:37:31.613775 env[1314]: time="2025-07-15T11:37:31.613757223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:37:31.613775 env[1314]: time="2025-07-15T11:37:31.613769376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:37:31.613966 env[1314]: time="2025-07-15T11:37:31.613927282Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4c8ac392a107b1a004ac3b3e6022c3a0a2908f2f5e0f13f50e83fe1af346719 pid=1821 runtime=io.containerd.runc.v2 Jul 15 11:37:31.621742 env[1314]: time="2025-07-15T11:37:31.621665559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:37:31.621742 env[1314]: time="2025-07-15T11:37:31.621707678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:37:31.621742 env[1314]: time="2025-07-15T11:37:31.621719901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:37:31.622685 env[1314]: time="2025-07-15T11:37:31.622206242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9728d47115803c58b0e590e2681cc1f3bce27391fb1a6b1e8589249cf64d942 pid=1848 runtime=io.containerd.runc.v2 Jul 15 11:37:31.660925 env[1314]: time="2025-07-15T11:37:31.659454269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb005a05f01f8e6c8515e86c26e42213a4a226751ffb79e1ead24b4c628ec6e0\"" Jul 15 11:37:31.661052 kubelet[1750]: E0715 11:37:31.660378 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:31.661961 env[1314]: time="2025-07-15T11:37:31.661929339Z" level=info msg="CreateContainer within sandbox \"fb005a05f01f8e6c8515e86c26e42213a4a226751ffb79e1ead24b4c628ec6e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 11:37:31.669811 env[1314]: time="2025-07-15T11:37:31.669772181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4c8ac392a107b1a004ac3b3e6022c3a0a2908f2f5e0f13f50e83fe1af346719\"" Jul 15 11:37:31.670574 kubelet[1750]: E0715 11:37:31.670549 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:31.672017 env[1314]: time="2025-07-15T11:37:31.671995039Z" level=info msg="CreateContainer within sandbox \"e4c8ac392a107b1a004ac3b3e6022c3a0a2908f2f5e0f13f50e83fe1af346719\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 11:37:31.673857 env[1314]: time="2025-07-15T11:37:31.673825420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a5b405d611f7b304f601fcb483e4fba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9728d47115803c58b0e590e2681cc1f3bce27391fb1a6b1e8589249cf64d942\"" Jul 15 11:37:31.674335 kubelet[1750]: E0715 11:37:31.674317 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:31.675796 env[1314]: time="2025-07-15T11:37:31.675767270Z" level=info msg="CreateContainer within sandbox \"b9728d47115803c58b0e590e2681cc1f3bce27391fb1a6b1e8589249cf64d942\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 11:37:31.677053 env[1314]: time="2025-07-15T11:37:31.677017424Z" level=info msg="CreateContainer within sandbox \"fb005a05f01f8e6c8515e86c26e42213a4a226751ffb79e1ead24b4c628ec6e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f8ed9164c8c9c4c7877842ee1f2cfae1229ec07b0f9654e02a87119e1be638f7\"" Jul 15 11:37:31.677556 env[1314]: time="2025-07-15T11:37:31.677521328Z" level=info msg="StartContainer for \"f8ed9164c8c9c4c7877842ee1f2cfae1229ec07b0f9654e02a87119e1be638f7\"" Jul 15 11:37:31.690402 env[1314]: time="2025-07-15T11:37:31.690359065Z" level=info msg="CreateContainer within sandbox \"e4c8ac392a107b1a004ac3b3e6022c3a0a2908f2f5e0f13f50e83fe1af346719\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} 
returns container id \"9233c2825984efd7f96b3004687e15667d85acd73b874f5542075785326b3458\"" Jul 15 11:37:31.690856 env[1314]: time="2025-07-15T11:37:31.690825539Z" level=info msg="StartContainer for \"9233c2825984efd7f96b3004687e15667d85acd73b874f5542075785326b3458\"" Jul 15 11:37:31.696396 env[1314]: time="2025-07-15T11:37:31.696339496Z" level=info msg="CreateContainer within sandbox \"b9728d47115803c58b0e590e2681cc1f3bce27391fb1a6b1e8589249cf64d942\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf8adb821dfa992ceb64ad3d65c45727d21af18d3b1f24d66d12540d1ad0fd15\"" Jul 15 11:37:31.696801 env[1314]: time="2025-07-15T11:37:31.696779370Z" level=info msg="StartContainer for \"cf8adb821dfa992ceb64ad3d65c45727d21af18d3b1f24d66d12540d1ad0fd15\"" Jul 15 11:37:31.739635 env[1314]: time="2025-07-15T11:37:31.739191067Z" level=info msg="StartContainer for \"f8ed9164c8c9c4c7877842ee1f2cfae1229ec07b0f9654e02a87119e1be638f7\" returns successfully" Jul 15 11:37:31.954120 env[1314]: time="2025-07-15T11:37:31.954018612Z" level=info msg="StartContainer for \"cf8adb821dfa992ceb64ad3d65c45727d21af18d3b1f24d66d12540d1ad0fd15\" returns successfully" Jul 15 11:37:32.031509 env[1314]: time="2025-07-15T11:37:32.031482439Z" level=info msg="StartContainer for \"9233c2825984efd7f96b3004687e15667d85acd73b874f5542075785326b3458\" returns successfully" Jul 15 11:37:32.122649 kubelet[1750]: E0715 11:37:32.122615 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:32.130564 kubelet[1750]: E0715 11:37:32.130549 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:32.131981 kubelet[1750]: E0715 11:37:32.131969 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:32.794958 kubelet[1750]: E0715 11:37:32.794914 1750 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 11:37:33.000698 kubelet[1750]: I0715 11:37:33.000661 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:37:33.079852 kubelet[1750]: I0715 11:37:33.079822 1750 apiserver.go:52] "Watching apiserver" Jul 15 11:37:33.092701 kubelet[1750]: I0715 11:37:33.092662 1750 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 11:37:33.134019 kubelet[1750]: E0715 11:37:33.134004 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:33.134268 kubelet[1750]: E0715 11:37:33.134227 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:33.175900 kubelet[1750]: I0715 11:37:33.175854 1750 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 11:37:34.995803 systemd[1]: Reloading. 
Jul 15 11:37:35.056912 /usr/lib/systemd/system-generators/torcx-generator[2049]: time="2025-07-15T11:37:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:37:35.056940 /usr/lib/systemd/system-generators/torcx-generator[2049]: time="2025-07-15T11:37:35Z" level=info msg="torcx already run" Jul 15 11:37:35.269809 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:37:35.269825 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:37:35.286310 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:37:35.365442 systemd[1]: Stopping kubelet.service... Jul 15 11:37:35.386717 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:37:35.387022 systemd[1]: Stopped kubelet.service. Jul 15 11:37:35.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:35.387989 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 15 11:37:35.388098 kernel: audit: type=1131 audit(1752579455.385:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:35.388636 systemd[1]: Starting kubelet.service... Jul 15 11:37:35.468601 systemd[1]: Started kubelet.service. Jul 15 11:37:35.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:35.473274 kernel: audit: type=1130 audit(1752579455.467:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:35.508694 kubelet[2105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:37:35.508694 kubelet[2105]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 11:37:35.508694 kubelet[2105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 11:37:35.509055 kubelet[2105]: I0715 11:37:35.508782 2105 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:37:35.513607 kubelet[2105]: I0715 11:37:35.513579 2105 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 11:37:35.513607 kubelet[2105]: I0715 11:37:35.513598 2105 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:37:35.513798 kubelet[2105]: I0715 11:37:35.513776 2105 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 11:37:35.514801 kubelet[2105]: I0715 11:37:35.514779 2105 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 11:37:35.516589 kubelet[2105]: I0715 11:37:35.516562 2105 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:37:35.520780 kubelet[2105]: E0715 11:37:35.520715 2105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:37:35.520780 kubelet[2105]: I0715 11:37:35.520740 2105 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:37:35.523746 kubelet[2105]: I0715 11:37:35.523710 2105 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 11:37:35.524148 kubelet[2105]: I0715 11:37:35.524125 2105 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 11:37:35.524263 kubelet[2105]: I0715 11:37:35.524224 2105 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:37:35.524432 kubelet[2105]: I0715 11:37:35.524260 2105 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 15 11:37:35.524432 kubelet[2105]: I0715 11:37:35.524428 2105 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:37:35.524536 kubelet[2105]: I0715 11:37:35.524437 2105 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 11:37:35.524536 kubelet[2105]: I0715 11:37:35.524462 2105 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:37:35.524587 kubelet[2105]: I0715 11:37:35.524539 2105 kubelet.go:408] "Attempting to sync node with API server" Jul 15 11:37:35.524587 kubelet[2105]: I0715 11:37:35.524551 2105 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:37:35.524587 kubelet[2105]: I0715 11:37:35.524574 2105 kubelet.go:314] "Adding apiserver pod source" Jul 15 11:37:35.524587 kubelet[2105]: I0715 11:37:35.524588 2105 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:37:35.527366 kubelet[2105]: I0715 11:37:35.525528 2105 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:37:35.527366 kubelet[2105]: I0715 11:37:35.525886 2105 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:37:35.527366 kubelet[2105]: I0715 11:37:35.526205 2105 server.go:1274] "Started kubelet" Jul 15 11:37:35.528428 kubelet[2105]: I0715 11:37:35.528402 2105 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 15 11:37:35.528473 kubelet[2105]: I0715 11:37:35.528438 2105 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 15 11:37:35.528473 kubelet[2105]: I0715 11:37:35.528465 2105 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Jul 15 11:37:35.542888 kernel: audit: type=1400 audit(1752579455.527:211): avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:35.542953 kernel: audit: type=1401 audit(1752579455.527:211): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:35.542983 kernel: audit: type=1300 audit(1752579455.527:211): arch=c000003e syscall=188 success=no exit=-22 a0=c000ac7710 a1=c000ac27f8 a2=c000ac76e0 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:35.543005 kernel: audit: type=1327 audit(1752579455.527:211): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:35.527000 audit[2105]: AVC avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:35.527000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:35.527000 audit[2105]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ac7710 a1=c000ac27f8 a2=c000ac76e0 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:35.527000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:35.543745 kubelet[2105]: I0715 11:37:35.543713 2105 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:37:35.545026 kubelet[2105]: I0715 11:37:35.545003 2105 server.go:449] "Adding debug handlers to kubelet server" Jul 15 11:37:35.552267 kernel: audit: type=1400 audit(1752579455.527:212): avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:35.552303 kernel: audit: type=1401 audit(1752579455.527:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:35.552345 kernel: audit: type=1300 audit(1752579455.527:212): arch=c000003e syscall=188 success=no exit=-22 a0=c00042ee00 a1=c000ac2810 a2=c000ac77a0 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:35.527000 audit[2105]: AVC avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:35.527000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:35.527000 audit[2105]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 
a0=c00042ee00 a1=c000ac2810 a2=c000ac77a0 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:35.552457 kubelet[2105]: I0715 11:37:35.546968 2105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:37:35.552457 kubelet[2105]: I0715 11:37:35.547117 2105 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:37:35.552457 kubelet[2105]: I0715 11:37:35.547735 2105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:37:35.552457 kubelet[2105]: E0715 11:37:35.550332 2105 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:37:35.552549 kubelet[2105]: I0715 11:37:35.552479 2105 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 11:37:35.552591 kubelet[2105]: I0715 11:37:35.552576 2105 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 11:37:35.552712 kubelet[2105]: I0715 11:37:35.552700 2105 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:37:35.554626 kernel: audit: type=1327 audit(1752579455.527:212): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:35.527000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:35.554689 kubelet[2105]: I0715 11:37:35.553458 2105 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:37:35.554689 kubelet[2105]: I0715 11:37:35.553517 2105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:37:35.554689 kubelet[2105]: I0715 11:37:35.554617 2105 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:37:35.560028 kubelet[2105]: I0715 11:37:35.559997 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 11:37:35.563592 kubelet[2105]: I0715 11:37:35.563384 2105 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 11:37:35.563592 kubelet[2105]: I0715 11:37:35.563405 2105 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 11:37:35.563592 kubelet[2105]: I0715 11:37:35.563431 2105 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 11:37:35.563592 kubelet[2105]: E0715 11:37:35.563467 2105 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:37:35.593609 kubelet[2105]: I0715 11:37:35.593586 2105 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 11:37:35.593609 kubelet[2105]: I0715 11:37:35.593601 2105 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 11:37:35.593609 kubelet[2105]: I0715 11:37:35.593617 2105 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:37:35.593767 kubelet[2105]: I0715 11:37:35.593737 2105 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 11:37:35.593767 kubelet[2105]: I0715 11:37:35.593746 2105 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 11:37:35.593767 kubelet[2105]: I0715 11:37:35.593761 2105 policy_none.go:49] "None policy: Start" Jul 15 11:37:35.594226 kubelet[2105]: I0715 11:37:35.594206 2105 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 11:37:35.594314 kubelet[2105]: I0715 11:37:35.594236 2105 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:37:35.594444 kubelet[2105]: I0715 11:37:35.594432 2105 state_mem.go:75] "Updated machine memory state" Jul 15 11:37:35.595495 kubelet[2105]: I0715 11:37:35.595477 2105 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:37:35.594000 audit[2105]: AVC avc: denied { mac_admin } for pid=2105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:37:35.594000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 15 11:37:35.594000 audit[2105]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007f2db0 a1=c0008209c0 a2=c0007f2d80 a3=25 items=0 ppid=1 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:35.594000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 15 11:37:35.595714 kubelet[2105]: I0715 11:37:35.595535 2105 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 15 11:37:35.595714 kubelet[2105]: I0715 11:37:35.595648 2105 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:37:35.595714 kubelet[2105]: I0715 11:37:35.595657 2105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:37:35.596029 kubelet[2105]: I0715 11:37:35.595991 2105 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:37:35.701175 kubelet[2105]: I0715 11:37:35.701146 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:37:35.706537 kubelet[2105]: I0715 11:37:35.706516 2105 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 11:37:35.706598 kubelet[2105]: I0715 11:37:35.706579 2105 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 11:37:35.753964 kubelet[2105]: I0715 11:37:35.753925 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:37:35.753964 kubelet[2105]: I0715 11:37:35.753957 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a5b405d611f7b304f601fcb483e4fba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a5b405d611f7b304f601fcb483e4fba\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:37:35.753964 kubelet[2105]: I0715 11:37:35.753977 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:35.754179 kubelet[2105]: I0715 11:37:35.753990 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:35.754179 kubelet[2105]: I0715 11:37:35.754007 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:35.754179 kubelet[2105]: I0715 11:37:35.754045 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a5b405d611f7b304f601fcb483e4fba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a5b405d611f7b304f601fcb483e4fba\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:37:35.754179 kubelet[2105]: I0715 11:37:35.754093 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6a5b405d611f7b304f601fcb483e4fba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a5b405d611f7b304f601fcb483e4fba\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:37:35.754179 kubelet[2105]: I0715 11:37:35.754132 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:35.754351 kubelet[2105]: I0715 11:37:35.754166 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:37:35.970440 kubelet[2105]: E0715 11:37:35.970411 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:35.971513 kubelet[2105]: E0715 11:37:35.971480 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:35.971581 kubelet[2105]: E0715 11:37:35.971551 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:36.525707 kubelet[2105]: I0715 11:37:36.525665 2105 apiserver.go:52] "Watching apiserver" Jul 15 11:37:36.553418 kubelet[2105]: I0715 11:37:36.553390 2105 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 11:37:36.574057 kubelet[2105]: E0715 11:37:36.574037 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:36.574366 kubelet[2105]: E0715 11:37:36.574337 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:36.580487 kubelet[2105]: E0715 11:37:36.580454 2105 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 11:37:36.580670 kubelet[2105]: E0715 11:37:36.580648 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:36.606268 kubelet[2105]: I0715 11:37:36.605892 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6058550440000001 podStartE2EDuration="1.605855044s" podCreationTimestamp="2025-07-15 11:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:37:36.605708669 +0000 UTC m=+1.132228215" watchObservedRunningTime="2025-07-15 11:37:36.605855044 +0000 UTC m=+1.132374589" Jul 15 11:37:36.619959 kubelet[2105]: I0715 
11:37:36.619913 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.619896617 podStartE2EDuration="1.619896617s" podCreationTimestamp="2025-07-15 11:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:37:36.614151316 +0000 UTC m=+1.140670851" watchObservedRunningTime="2025-07-15 11:37:36.619896617 +0000 UTC m=+1.146416162" Jul 15 11:37:36.626402 kubelet[2105]: I0715 11:37:36.626334 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6263125440000001 podStartE2EDuration="1.626312544s" podCreationTimestamp="2025-07-15 11:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:37:36.620365566 +0000 UTC m=+1.146885101" watchObservedRunningTime="2025-07-15 11:37:36.626312544 +0000 UTC m=+1.152832089" Jul 15 11:37:37.575336 kubelet[2105]: E0715 11:37:37.575294 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:39.889693 kubelet[2105]: I0715 11:37:39.889659 2105 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 11:37:39.890070 env[1314]: time="2025-07-15T11:37:39.889967515Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 11:37:39.890235 kubelet[2105]: I0715 11:37:39.890173 2105 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 11:37:40.486392 kubelet[2105]: I0715 11:37:40.486356 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab7ff294-e005-4392-93bb-41a57629813a-lib-modules\") pod \"kube-proxy-wh6v8\" (UID: \"ab7ff294-e005-4392-93bb-41a57629813a\") " pod="kube-system/kube-proxy-wh6v8" Jul 15 11:37:40.486392 kubelet[2105]: I0715 11:37:40.486392 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab7ff294-e005-4392-93bb-41a57629813a-kube-proxy\") pod \"kube-proxy-wh6v8\" (UID: \"ab7ff294-e005-4392-93bb-41a57629813a\") " pod="kube-system/kube-proxy-wh6v8" Jul 15 11:37:40.486585 kubelet[2105]: I0715 11:37:40.486456 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab7ff294-e005-4392-93bb-41a57629813a-xtables-lock\") pod \"kube-proxy-wh6v8\" (UID: \"ab7ff294-e005-4392-93bb-41a57629813a\") " pod="kube-system/kube-proxy-wh6v8" Jul 15 11:37:40.486585 kubelet[2105]: I0715 11:37:40.486483 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfkvx\" (UniqueName: \"kubernetes.io/projected/ab7ff294-e005-4392-93bb-41a57629813a-kube-api-access-bfkvx\") pod \"kube-proxy-wh6v8\" (UID: \"ab7ff294-e005-4392-93bb-41a57629813a\") " pod="kube-system/kube-proxy-wh6v8" Jul 15 11:37:40.590934 kubelet[2105]: E0715 11:37:40.590884 2105 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 15 11:37:40.591051 
kubelet[2105]: E0715 11:37:40.590955 2105 projected.go:194] Error preparing data for projected volume kube-api-access-bfkvx for pod kube-system/kube-proxy-wh6v8: configmap "kube-root-ca.crt" not found Jul 15 11:37:40.591227 kubelet[2105]: E0715 11:37:40.591197 2105 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab7ff294-e005-4392-93bb-41a57629813a-kube-api-access-bfkvx podName:ab7ff294-e005-4392-93bb-41a57629813a nodeName:}" failed. No retries permitted until 2025-07-15 11:37:41.090996326 +0000 UTC m=+5.617515871 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bfkvx" (UniqueName: "kubernetes.io/projected/ab7ff294-e005-4392-93bb-41a57629813a-kube-api-access-bfkvx") pod "kube-proxy-wh6v8" (UID: "ab7ff294-e005-4392-93bb-41a57629813a") : configmap "kube-root-ca.crt" not found Jul 15 11:37:41.091437 kubelet[2105]: I0715 11:37:41.091398 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wksf9\" (UniqueName: \"kubernetes.io/projected/53b08de4-612a-4ade-89e7-77cf21b801ec-kube-api-access-wksf9\") pod \"tigera-operator-5bf8dfcb4-g84nt\" (UID: \"53b08de4-612a-4ade-89e7-77cf21b801ec\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-g84nt" Jul 15 11:37:41.091437 kubelet[2105]: I0715 11:37:41.091430 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53b08de4-612a-4ade-89e7-77cf21b801ec-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-g84nt\" (UID: \"53b08de4-612a-4ade-89e7-77cf21b801ec\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-g84nt" Jul 15 11:37:41.091809 kubelet[2105]: I0715 11:37:41.091682 2105 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 15 11:37:41.311993 env[1314]: time="2025-07-15T11:37:41.311954695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-g84nt,Uid:53b08de4-612a-4ade-89e7-77cf21b801ec,Namespace:tigera-operator,Attempt:0,}" Jul 15 11:37:41.326068 env[1314]: time="2025-07-15T11:37:41.325991266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:37:41.326068 env[1314]: time="2025-07-15T11:37:41.326037285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:37:41.326068 env[1314]: time="2025-07-15T11:37:41.326052064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:37:41.326279 env[1314]: time="2025-07-15T11:37:41.326221751Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/603324513de0dd9a1428d17ca3d7d21505f5f9f58d65538d128f41afd651ebf5 pid=2162 runtime=io.containerd.runc.v2 Jul 15 11:37:41.352895 kubelet[2105]: E0715 11:37:41.352798 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:41.353741 env[1314]: time="2025-07-15T11:37:41.353711347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wh6v8,Uid:ab7ff294-e005-4392-93bb-41a57629813a,Namespace:kube-system,Attempt:0,}" Jul 15 11:37:41.366155 env[1314]: time="2025-07-15T11:37:41.365768156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-g84nt,Uid:53b08de4-612a-4ade-89e7-77cf21b801ec,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"603324513de0dd9a1428d17ca3d7d21505f5f9f58d65538d128f41afd651ebf5\"" Jul 15 11:37:41.367931 env[1314]: time="2025-07-15T11:37:41.367867269Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 11:37:41.368616 env[1314]: time="2025-07-15T11:37:41.368428873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:37:41.368616 env[1314]: time="2025-07-15T11:37:41.368478709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:37:41.368616 env[1314]: time="2025-07-15T11:37:41.368487687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:37:41.368833 env[1314]: time="2025-07-15T11:37:41.368778348Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/353a3378ab218791fef467167a063c24bac0fdbdc53327898a72eef77767edfb pid=2202 runtime=io.containerd.runc.v2 Jul 15 11:37:41.397345 env[1314]: time="2025-07-15T11:37:41.397292532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wh6v8,Uid:ab7ff294-e005-4392-93bb-41a57629813a,Namespace:kube-system,Attempt:0,} returns sandbox id \"353a3378ab218791fef467167a063c24bac0fdbdc53327898a72eef77767edfb\"" Jul 15 11:37:41.397960 kubelet[2105]: E0715 11:37:41.397931 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:41.400699 env[1314]: time="2025-07-15T11:37:41.400664503Z" level=info msg="CreateContainer within sandbox \"353a3378ab218791fef467167a063c24bac0fdbdc53327898a72eef77767edfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 11:37:41.413914 env[1314]: time="2025-07-15T11:37:41.413866582Z" level=info msg="CreateContainer within sandbox \"353a3378ab218791fef467167a063c24bac0fdbdc53327898a72eef77767edfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b01da0635f79eac485cd19da1c54723e2c1f4d3d222d0f9de1f1e5d76afbe240\"" Jul 15 11:37:41.414387 env[1314]: time="2025-07-15T11:37:41.414346098Z" level=info msg="StartContainer for \"b01da0635f79eac485cd19da1c54723e2c1f4d3d222d0f9de1f1e5d76afbe240\"" Jul 15 11:37:41.460281 env[1314]: time="2025-07-15T11:37:41.456345981Z" level=info msg="StartContainer for \"b01da0635f79eac485cd19da1c54723e2c1f4d3d222d0f9de1f1e5d76afbe240\" returns successfully" Jul 15 11:37:41.563285 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 15 11:37:41.563403 kernel: audit: type=1325 audit(1752579461.560:214): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.560000 audit[2302]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.560000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfd8e4950 a2=0 a3=7ffdfd8e493c items=0 ppid=2252 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.569081 kernel: audit: type=1300 audit(1752579461.560:214): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfd8e4950 a2=0 a3=7ffdfd8e493c items=0 ppid=2252 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:37:41.571431 kernel: audit: type=1327 audit(1752579461.560:214): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:37:41.571477 kernel: audit: type=1325 audit(1752579461.561:215): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jul 15 11:37:41.561000 audit[2303]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.561000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf31e7270 a2=0 a3=7ffcf31e725c items=0 ppid=2252 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.577940 kernel: audit: type=1300 audit(1752579461.561:215): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf31e7270 a2=0 a3=7ffcf31e725c items=0 ppid=2252 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.577988 kernel: audit: type=1327 audit(1752579461.561:215): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:37:41.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 15 11:37:41.563000 audit[2304]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.582818 kernel: audit: type=1325 audit(1752579461.563:216): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.582871 kernel: audit: type=1300 audit(1752579461.563:216): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd89eb2ce0 a2=0 a3=7ffd89eb2ccc items=0 ppid=2252 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.563000 audit[2304]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd89eb2ce0 a2=0 a3=7ffd89eb2ccc items=0 ppid=2252 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.582978 kubelet[2105]: E0715 11:37:41.581684 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:41.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:37:41.588910 kernel: audit: type=1327 audit(1752579461.563:216): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:37:41.589353 kernel: audit: type=1325 audit(1752579461.563:217): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.563000 audit[2305]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.563000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda502dda0 a2=0 a3=7ffda502dd8c items=0 ppid=2252 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 15 11:37:41.563000 audit[2306]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.563000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1ed1c9c0 a2=0 a3=7fff1ed1c9ac items=0 ppid=2252 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 15 11:37:41.563000 audit[2307]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.563000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeffb7f000 a2=0 a3=7ffeffb7efec items=0 ppid=2252 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 15 11:37:41.663000 audit[2308]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.663000 audit[2308]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd67fac560 a2=0 a3=7ffd67fac54c items=0 ppid=2252 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 15 11:37:41.665000 audit[2310]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.665000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcb709dc90 a2=0 a3=7ffcb709dc7c items=0 ppid=2252 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 15 11:37:41.668000 audit[2313]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.668000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe547f8590 a2=0 a3=7ffe547f857c items=0 ppid=2252 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 15 11:37:41.669000 audit[2314]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.669000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7a089300 a2=0 a3=7ffe7a0892ec items=0 ppid=2252 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 15 11:37:41.671000 audit[2316]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.671000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc3f79ea0 a2=0 a3=7fffc3f79e8c items=0 ppid=2252 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 15 11:37:41.672000 audit[2317]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.672000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff31f66de0 a2=0 a3=7fff31f66dcc items=0 ppid=2252 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 15 11:37:41.674000 audit[2319]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.674000 audit[2319]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd46e0a220 a2=0 a3=7ffd46e0a20c items=0 ppid=2252 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.674000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 15 11:37:41.676000 audit[2322]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2322 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.676000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffccba6ed20 a2=0 a3=7ffccba6ed0c items=0 ppid=2252 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.676000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 15 11:37:41.677000 audit[2323]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.677000 audit[2323]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbd3aa300 a2=0 a3=7ffdbd3aa2ec items=0 ppid=2252 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 15 11:37:41.679000 audit[2325]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.679000 audit[2325]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff321128b0 a2=0 a3=7fff3211289c items=0 ppid=2252 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 15 11:37:41.680000 audit[2326]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.680000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb94c0360 a2=0 a3=7ffdb94c034c items=0 ppid=2252 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 15 11:37:41.682000 audit[2328]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.682000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5650ff50 a2=0 a3=7fff5650ff3c items=0 ppid=2252 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.682000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:37:41.685000 audit[2331]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.685000 audit[2331]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeab2cd550 a2=0 a3=7ffeab2cd53c items=0 ppid=2252 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:37:41.688000 audit[2334]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.688000 audit[2334]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5ecb21a0 a2=0 a3=7ffc5ecb218c items=0 ppid=2252 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.688000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 15 11:37:41.689000 audit[2335]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.689000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdcf42f530 a2=0 a3=7ffdcf42f51c items=0 ppid=2252 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 15 11:37:41.691000 audit[2337]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.691000 audit[2337]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffe792dbd0 a2=0 a3=7fffe792dbbc items=0 ppid=2252 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:37:41.693000 audit[2340]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.693000 audit[2340]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd3e858630 a2=0 a3=7ffd3e85861c items=0 ppid=2252 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:37:41.694000 audit[2341]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.694000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc646d99f0 a2=0 a3=7ffc646d99dc items=0 ppid=2252 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 15 11:37:41.696000 audit[2343]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 15 11:37:41.696000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe21010200 a2=0 a3=7ffe210101ec items=0 ppid=2252 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 15 11:37:41.714000 audit[2349]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:41.714000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcbd764df0 a2=0 a3=7ffcbd764ddc items=0 ppid=2252 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:41.724000 audit[2349]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:41.724000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcbd764df0 a2=0 a3=7ffcbd764ddc items=0 ppid=2252 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:41.725000 audit[2354]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2354 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.725000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd92edd580 a2=0 a3=7ffd92edd56c items=0 ppid=2252 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 15 11:37:41.727000 audit[2356]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.727000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffce598c360 a2=0 a3=7ffce598c34c items=0 ppid=2252 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 15 11:37:41.729000 audit[2359]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.729000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffda54710c0 a2=0 a3=7ffda54710ac items=0 ppid=2252 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.729000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 15 11:37:41.730000 audit[2360]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.730000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdf35b8e0 a2=0 a3=7fffdf35b8cc items=0 ppid=2252 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 15 11:37:41.732000 audit[2362]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.732000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc62ba4740 a2=0 a3=7ffc62ba472c items=0 ppid=2252 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.732000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 15 11:37:41.733000 audit[2363]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.733000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7a997830 a2=0 a3=7ffd7a99781c items=0 ppid=2252 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 15 11:37:41.735000 audit[2365]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.735000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffefa2da3b0 a2=0 a3=7ffefa2da39c items=0 ppid=2252 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.735000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 15 11:37:41.738000 audit[2368]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.738000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdd3a350e0 a2=0 a3=7ffdd3a350cc items=0 ppid=2252 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 15 11:37:41.739000 audit[2369]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.739000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4bd898d0 a2=0 a3=7ffe4bd898bc items=0 ppid=2252 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.739000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 15 11:37:41.741000 audit[2371]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.741000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdb80e1a0 a2=0 
a3=7fffdb80e18c items=0 ppid=2252 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 15 11:37:41.742000 audit[2372]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.742000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec80be2a0 a2=0 a3=7ffec80be28c items=0 ppid=2252 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.742000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 15 11:37:41.744000 audit[2374]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.744000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefe7ffbd0 a2=0 a3=7ffefe7ffbbc items=0 ppid=2252 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 15 11:37:41.747000 audit[2377]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.747000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4b220c30 a2=0 a3=7ffc4b220c1c items=0 ppid=2252 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 15 11:37:41.750000 audit[2380]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.750000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd374cfca0 a2=0 a3=7ffd374cfc8c items=0 ppid=2252 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.750000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 15 11:37:41.751000 audit[2381]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.751000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff83033650 a2=0 a3=7fff8303363c items=0 ppid=2252 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 15 11:37:41.753000 audit[2383]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.753000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe64088c10 a2=0 a3=7ffe64088bfc items=0 ppid=2252 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:37:41.756000 audit[2386]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.756000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc699265e0 a2=0 a3=7ffc699265cc items=0 ppid=2252 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 15 11:37:41.757000 audit[2387]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.757000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3de124c0 a2=0 a3=7ffe3de124ac items=0 ppid=2252 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 15 11:37:41.759000 audit[2389]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.759000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc6cabf4f0 a2=0 a3=7ffc6cabf4dc items=0 ppid=2252 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.759000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 15 11:37:41.759000 audit[2390]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.759000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8892e6d0 a2=0 a3=7ffe8892e6bc items=0 ppid=2252 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.759000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 15 11:37:41.761000 audit[2392]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.761000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe5e627470 a2=0 a3=7ffe5e62745c items=0 ppid=2252 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:37:41.764000 audit[2395]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 15 11:37:41.764000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdf9579bd0 a2=0 a3=7ffdf9579bbc items=0 ppid=2252 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.764000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 15 11:37:41.766000 audit[2397]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 15 11:37:41.766000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff85a4e790 a2=0 a3=7fff85a4e77c items=0 ppid=2252 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.766000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:41.767000 audit[2397]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 15 11:37:41.767000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff85a4e790 a2=0 a3=7fff85a4e77c items=0 ppid=2252 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:41.767000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:42.639036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470449817.mount: Deactivated successfully. Jul 15 11:37:43.448729 env[1314]: time="2025-07-15T11:37:43.448666010Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:43.450547 env[1314]: time="2025-07-15T11:37:43.450486643Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:43.451825 env[1314]: time="2025-07-15T11:37:43.451794249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:43.453302 env[1314]: time="2025-07-15T11:37:43.453263997Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:43.453731 env[1314]: time="2025-07-15T11:37:43.453705807Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 15 11:37:43.455641 env[1314]: time="2025-07-15T11:37:43.455604219Z" level=info msg="CreateContainer within sandbox \"603324513de0dd9a1428d17ca3d7d21505f5f9f58d65538d128f41afd651ebf5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 11:37:43.467034 env[1314]: time="2025-07-15T11:37:43.466991068Z" level=info msg="CreateContainer within sandbox \"603324513de0dd9a1428d17ca3d7d21505f5f9f58d65538d128f41afd651ebf5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cf61500b59c6a0035e9889fccea4b030dd195a0ddecea0469568023a82bd424c\"" Jul 15 11:37:43.467462 env[1314]: time="2025-07-15T11:37:43.467409181Z" level=info msg="StartContainer for \"cf61500b59c6a0035e9889fccea4b030dd195a0ddecea0469568023a82bd424c\"" Jul 15 11:37:43.847997 env[1314]: time="2025-07-15T11:37:43.847954070Z" level=info msg="StartContainer for \"cf61500b59c6a0035e9889fccea4b030dd195a0ddecea0469568023a82bd424c\" returns successfully" Jul 15 11:37:44.660703 kubelet[2105]: E0715 11:37:44.660673 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:44.863714 kubelet[2105]: E0715 11:37:44.863672 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:45.164433 kubelet[2105]: I0715 11:37:45.164349 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wh6v8" podStartSLOduration=5.16432717 podStartE2EDuration="5.16432717s" podCreationTimestamp="2025-07-15 11:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:37:41.655887098 +0000 UTC m=+6.182406643" watchObservedRunningTime="2025-07-15 11:37:45.16432717 +0000 UTC m=+9.690846715" Jul 15 11:37:45.177633 kubelet[2105]: I0715 11:37:45.177552 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-g84nt" podStartSLOduration=3.089962303 podStartE2EDuration="5.177530214s" podCreationTimestamp="2025-07-15 11:37:40 +0000 UTC" firstStartedPulling="2025-07-15 11:37:41.366886145 +0000 UTC m=+5.893405690" lastFinishedPulling="2025-07-15 11:37:43.454454056 +0000 UTC m=+7.980973601" observedRunningTime="2025-07-15 11:37:45.166341925 +0000 UTC m=+9.692861470" watchObservedRunningTime="2025-07-15 11:37:45.177530214 +0000 UTC m=+9.704049769" Jul 15 11:37:45.510306 kubelet[2105]: E0715 11:37:45.510199 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:45.545231 kubelet[2105]: E0715 11:37:45.545201 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:45.852509 kubelet[2105]: E0715 11:37:45.852484 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:45.853323 kubelet[2105]: E0715 11:37:45.853294 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:49.270119 sudo[1478]: pam_unix(sudo:session): session closed for user root Jul 15 11:37:49.269000 audit[1478]: USER_END pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:49.274476 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 15 11:37:49.274609 kernel: audit: type=1106 audit(1752579469.269:265): pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:49.274000 audit[1478]: CRED_DISP pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 15 11:37:49.275265 kernel: audit: type=1104 audit(1752579469.274:266): pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 15 11:37:49.279608 sshd[1472]: pam_unix(sshd:session): session closed for user core Jul 15 11:37:49.280000 audit[1472]: USER_END pid=1472 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:49.286400 kernel: audit: type=1106 audit(1752579469.280:267): pid=1472 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:49.285403 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:38296.service: Deactivated successfully. Jul 15 11:37:49.286050 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 11:37:49.280000 audit[1472]: CRED_DISP pid=1472 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:49.286821 systemd-logind[1296]: Session 7 logged out. Waiting for processes to exit. Jul 15 11:37:49.290333 kernel: audit: type=1104 audit(1752579469.280:268): pid=1472 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:37:49.287989 systemd-logind[1296]: Removed session 7. Jul 15 11:37:49.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.133:22-10.0.0.1:38296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:37:49.294266 kernel: audit: type=1131 audit(1752579469.285:269): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.133:22-10.0.0.1:38296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:37:49.901000 audit[2488]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:49.905273 kernel: audit: type=1325 audit(1752579469.901:270): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:49.901000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd70cdb380 a2=0 a3=7ffd70cdb36c items=0 ppid=2252 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:49.911291 kernel: audit: type=1300 audit(1752579469.901:270): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd70cdb380 a2=0 a3=7ffd70cdb36c items=0 ppid=2252 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:49.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:49.914270 kernel: audit: type=1327 audit(1752579469.901:270): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:49.916000 audit[2488]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:49.919271 kernel: audit: type=1325 audit(1752579469.916:271): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:49.916000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd70cdb380 a2=0 a3=0 items=0 ppid=2252 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:49.924272 kernel: audit: type=1300 audit(1752579469.916:271): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd70cdb380 a2=0 a3=0 items=0 ppid=2252 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:49.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:49.949000 audit[2491]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:49.949000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc966b3ff0 a2=0 a3=7ffc966b3fdc items=0 ppid=2252 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:49.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:49.956000 audit[2491]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2491 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:49.956000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc966b3ff0 a2=0 a3=0 items=0 ppid=2252 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:49.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:50.489375 update_engine[1300]: I0715 11:37:50.489321 1300 update_attempter.cc:509] Updating boot flags... Jul 15 11:37:51.637000 audit[2508]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:51.637000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffd8413920 a2=0 a3=7fffd841390c items=0 ppid=2252 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:51.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:51.643000 audit[2508]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:51.643000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd8413920 a2=0 a3=0 items=0 ppid=2252 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:51.643000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:51.864119 kubelet[2105]: I0715 11:37:51.864068 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aa9ec131-003a-495b-91c5-69475efeada5-typha-certs\") pod \"calico-typha-6856b78b9c-vqz59\" (UID: \"aa9ec131-003a-495b-91c5-69475efeada5\") " pod="calico-system/calico-typha-6856b78b9c-vqz59" Jul 15 11:37:51.864119 kubelet[2105]: I0715 11:37:51.864112 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjz48\" (UniqueName: \"kubernetes.io/projected/aa9ec131-003a-495b-91c5-69475efeada5-kube-api-access-kjz48\") pod \"calico-typha-6856b78b9c-vqz59\" (UID: \"aa9ec131-003a-495b-91c5-69475efeada5\") " pod="calico-system/calico-typha-6856b78b9c-vqz59" Jul 15 11:37:51.864551 kubelet[2105]: I0715 11:37:51.864140 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa9ec131-003a-495b-91c5-69475efeada5-tigera-ca-bundle\") pod \"calico-typha-6856b78b9c-vqz59\" (UID: \"aa9ec131-003a-495b-91c5-69475efeada5\") " pod="calico-system/calico-typha-6856b78b9c-vqz59" Jul 15 11:37:51.972000 audit[2512]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:51.972000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 
a0=3 a1=7fff0cbac380 a2=0 a3=7fff0cbac36c items=0 ppid=2252 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:51.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:51.979000 audit[2512]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:51.979000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0cbac380 a2=0 a3=0 items=0 ppid=2252 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:51.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:52.131940 kubelet[2105]: E0715 11:37:52.131895 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:52.132347 env[1314]: time="2025-07-15T11:37:52.132303942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6856b78b9c-vqz59,Uid:aa9ec131-003a-495b-91c5-69475efeada5,Namespace:calico-system,Attempt:0,}" Jul 15 11:37:52.148231 env[1314]: time="2025-07-15T11:37:52.148162197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:37:52.148231 env[1314]: time="2025-07-15T11:37:52.148203006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:37:52.148231 env[1314]: time="2025-07-15T11:37:52.148213245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:37:52.148620 env[1314]: time="2025-07-15T11:37:52.148541970Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/920895a8c546ab9420ab6315065275b15b11b6389f41e959bdf93fc17270392b pid=2520 runtime=io.containerd.runc.v2 Jul 15 11:37:52.240057 env[1314]: time="2025-07-15T11:37:52.239946944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6856b78b9c-vqz59,Uid:aa9ec131-003a-495b-91c5-69475efeada5,Namespace:calico-system,Attempt:0,} returns sandbox id \"920895a8c546ab9420ab6315065275b15b11b6389f41e959bdf93fc17270392b\"" Jul 15 11:37:52.240790 kubelet[2105]: E0715 11:37:52.240768 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:52.241824 env[1314]: time="2025-07-15T11:37:52.241806572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 11:37:52.265808 kubelet[2105]: I0715 11:37:52.265766 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2e84812-a030-402b-b914-38421746ca62-node-certs\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.265808 kubelet[2105]: I0715 11:37:52.265808 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-lib-modules\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.265917 kubelet[2105]: I0715 11:37:52.265826 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-cni-net-dir\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.265917 kubelet[2105]: I0715 11:37:52.265840 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-policysync\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.265917 kubelet[2105]: I0715 11:37:52.265853 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-cni-log-dir\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.265917 kubelet[2105]: I0715 11:37:52.265865 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-xtables-lock\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.265917 kubelet[2105]: I0715 11:37:52.265882 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a2e84812-a030-402b-b914-38421746ca62-tigera-ca-bundle\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.266032 kubelet[2105]: I0715 11:37:52.265896 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj82b\" (UniqueName: \"kubernetes.io/projected/a2e84812-a030-402b-b914-38421746ca62-kube-api-access-vj82b\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.266032 kubelet[2105]: I0715 11:37:52.265918 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-cni-bin-dir\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.266032 kubelet[2105]: I0715 11:37:52.265934 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-var-run-calico\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.266032 kubelet[2105]: I0715 11:37:52.265949 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-flexvol-driver-host\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.266032 kubelet[2105]: I0715 11:37:52.265962 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2e84812-a030-402b-b914-38421746ca62-var-lib-calico\") pod \"calico-node-kl4nd\" (UID: \"a2e84812-a030-402b-b914-38421746ca62\") " pod="calico-system/calico-node-kl4nd" Jul 15 11:37:52.368181 kubelet[2105]: E0715 11:37:52.368149 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.368181 kubelet[2105]: W0715 11:37:52.368171 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.368383 kubelet[2105]: E0715 11:37:52.368201 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.369846 kubelet[2105]: E0715 11:37:52.369818 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.369846 kubelet[2105]: W0715 11:37:52.369838 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.370014 kubelet[2105]: E0715 11:37:52.369860 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.373032 kubelet[2105]: E0715 11:37:52.373015 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.373032 kubelet[2105]: W0715 11:37:52.373026 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.373109 kubelet[2105]: E0715 11:37:52.373037 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.418893 kubelet[2105]: E0715 11:37:52.418827 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgg9" podUID="9513186e-84fa-49d1-893d-fcd495764a33" Jul 15 11:37:52.455411 kubelet[2105]: E0715 11:37:52.455378 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.455411 kubelet[2105]: W0715 11:37:52.455401 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.455587 kubelet[2105]: E0715 11:37:52.455421 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.455655 kubelet[2105]: E0715 11:37:52.455625 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.455655 kubelet[2105]: W0715 11:37:52.455645 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.455824 kubelet[2105]: E0715 11:37:52.455666 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.455958 kubelet[2105]: E0715 11:37:52.455925 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.455958 kubelet[2105]: W0715 11:37:52.455943 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.455958 kubelet[2105]: E0715 11:37:52.455966 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.456172 kubelet[2105]: E0715 11:37:52.456121 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.456172 kubelet[2105]: W0715 11:37:52.456127 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.456172 kubelet[2105]: E0715 11:37:52.456134 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.456288 kubelet[2105]: E0715 11:37:52.456264 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.456288 kubelet[2105]: W0715 11:37:52.456277 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.456288 kubelet[2105]: E0715 11:37:52.456289 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.456452 kubelet[2105]: E0715 11:37:52.456432 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.456452 kubelet[2105]: W0715 11:37:52.456448 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.456556 kubelet[2105]: E0715 11:37:52.456459 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.456651 kubelet[2105]: E0715 11:37:52.456636 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.456651 kubelet[2105]: W0715 11:37:52.456646 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.456651 kubelet[2105]: E0715 11:37:52.456653 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.456784 kubelet[2105]: E0715 11:37:52.456759 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.456784 kubelet[2105]: W0715 11:37:52.456775 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.456784 kubelet[2105]: E0715 11:37:52.456782 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.456921 kubelet[2105]: E0715 11:37:52.456906 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.456921 kubelet[2105]: W0715 11:37:52.456915 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.456921 kubelet[2105]: E0715 11:37:52.456923 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.457032 kubelet[2105]: E0715 11:37:52.457019 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.457032 kubelet[2105]: W0715 11:37:52.457025 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.457032 kubelet[2105]: E0715 11:37:52.457031 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.457146 kubelet[2105]: E0715 11:37:52.457128 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.457146 kubelet[2105]: W0715 11:37:52.457141 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.457232 kubelet[2105]: E0715 11:37:52.457149 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.457322 kubelet[2105]: E0715 11:37:52.457305 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.457322 kubelet[2105]: W0715 11:37:52.457317 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.457322 kubelet[2105]: E0715 11:37:52.457324 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.457529 kubelet[2105]: E0715 11:37:52.457497 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.457529 kubelet[2105]: W0715 11:37:52.457519 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.457529 kubelet[2105]: E0715 11:37:52.457527 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.457675 kubelet[2105]: E0715 11:37:52.457650 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.457675 kubelet[2105]: W0715 11:37:52.457666 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.457675 kubelet[2105]: E0715 11:37:52.457673 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.457778 kubelet[2105]: E0715 11:37:52.457767 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.457778 kubelet[2105]: W0715 11:37:52.457774 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.457832 kubelet[2105]: E0715 11:37:52.457780 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.457910 kubelet[2105]: E0715 11:37:52.457894 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.457910 kubelet[2105]: W0715 11:37:52.457904 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.457910 kubelet[2105]: E0715 11:37:52.457910 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.458136 kubelet[2105]: E0715 11:37:52.458101 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.458199 kubelet[2105]: W0715 11:37:52.458141 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.458199 kubelet[2105]: E0715 11:37:52.458164 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.458454 kubelet[2105]: E0715 11:37:52.458418 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.458454 kubelet[2105]: W0715 11:37:52.458430 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.458454 kubelet[2105]: E0715 11:37:52.458440 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.458655 kubelet[2105]: E0715 11:37:52.458600 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.458655 kubelet[2105]: W0715 11:37:52.458607 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.458655 kubelet[2105]: E0715 11:37:52.458615 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.458794 kubelet[2105]: E0715 11:37:52.458727 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.458794 kubelet[2105]: W0715 11:37:52.458748 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.458794 kubelet[2105]: E0715 11:37:52.458755 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.468131 kubelet[2105]: E0715 11:37:52.468107 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.468131 kubelet[2105]: W0715 11:37:52.468122 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.468131 kubelet[2105]: E0715 11:37:52.468134 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.468327 kubelet[2105]: I0715 11:37:52.468160 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjppr\" (UniqueName: \"kubernetes.io/projected/9513186e-84fa-49d1-893d-fcd495764a33-kube-api-access-mjppr\") pod \"csi-node-driver-swgg9\" (UID: \"9513186e-84fa-49d1-893d-fcd495764a33\") " pod="calico-system/csi-node-driver-swgg9" Jul 15 11:37:52.468403 kubelet[2105]: E0715 11:37:52.468378 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.468403 kubelet[2105]: W0715 11:37:52.468393 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.468403 kubelet[2105]: E0715 11:37:52.468402 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.468532 kubelet[2105]: I0715 11:37:52.468414 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9513186e-84fa-49d1-893d-fcd495764a33-socket-dir\") pod \"csi-node-driver-swgg9\" (UID: \"9513186e-84fa-49d1-893d-fcd495764a33\") " pod="calico-system/csi-node-driver-swgg9" Jul 15 11:37:52.468628 kubelet[2105]: E0715 11:37:52.468606 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.468628 kubelet[2105]: W0715 11:37:52.468616 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.468628 kubelet[2105]: E0715 11:37:52.468628 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.468767 kubelet[2105]: I0715 11:37:52.468641 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9513186e-84fa-49d1-893d-fcd495764a33-varrun\") pod \"csi-node-driver-swgg9\" (UID: \"9513186e-84fa-49d1-893d-fcd495764a33\") " pod="calico-system/csi-node-driver-swgg9" Jul 15 11:37:52.468862 kubelet[2105]: E0715 11:37:52.468843 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.468862 kubelet[2105]: W0715 11:37:52.468858 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.468956 kubelet[2105]: E0715 11:37:52.468869 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.469015 kubelet[2105]: E0715 11:37:52.469000 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.469015 kubelet[2105]: W0715 11:37:52.469008 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.469015 kubelet[2105]: E0715 11:37:52.469015 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.469189 kubelet[2105]: E0715 11:37:52.469169 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.469189 kubelet[2105]: W0715 11:37:52.469180 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.469189 kubelet[2105]: E0715 11:37:52.469191 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.469356 kubelet[2105]: E0715 11:37:52.469341 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.469356 kubelet[2105]: W0715 11:37:52.469350 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.469443 kubelet[2105]: E0715 11:37:52.469360 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.469490 kubelet[2105]: E0715 11:37:52.469469 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.469490 kubelet[2105]: W0715 11:37:52.469484 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.469490 kubelet[2105]: E0715 11:37:52.469492 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.469603 kubelet[2105]: I0715 11:37:52.469515 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9513186e-84fa-49d1-893d-fcd495764a33-kubelet-dir\") pod \"csi-node-driver-swgg9\" (UID: \"9513186e-84fa-49d1-893d-fcd495764a33\") " pod="calico-system/csi-node-driver-swgg9" Jul 15 11:37:52.469660 kubelet[2105]: E0715 11:37:52.469643 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.469660 kubelet[2105]: W0715 11:37:52.469654 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.469660 kubelet[2105]: E0715 11:37:52.469661 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.469767 kubelet[2105]: I0715 11:37:52.469674 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9513186e-84fa-49d1-893d-fcd495764a33-registration-dir\") pod \"csi-node-driver-swgg9\" (UID: \"9513186e-84fa-49d1-893d-fcd495764a33\") " pod="calico-system/csi-node-driver-swgg9" Jul 15 11:37:52.469827 kubelet[2105]: E0715 11:37:52.469799 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.469827 kubelet[2105]: W0715 11:37:52.469806 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.469827 kubelet[2105]: E0715 11:37:52.469815 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.469952 kubelet[2105]: E0715 11:37:52.469930 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.469952 kubelet[2105]: W0715 11:37:52.469943 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.470036 kubelet[2105]: E0715 11:37:52.469999 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.470093 kubelet[2105]: E0715 11:37:52.470076 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.470093 kubelet[2105]: W0715 11:37:52.470087 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.470173 kubelet[2105]: E0715 11:37:52.470146 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.470343 kubelet[2105]: E0715 11:37:52.470302 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.470343 kubelet[2105]: W0715 11:37:52.470317 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.470343 kubelet[2105]: E0715 11:37:52.470339 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.470515 kubelet[2105]: E0715 11:37:52.470484 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.470515 kubelet[2105]: W0715 11:37:52.470506 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.470515 kubelet[2105]: E0715 11:37:52.470516 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.470676 kubelet[2105]: E0715 11:37:52.470661 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.470676 kubelet[2105]: W0715 11:37:52.470671 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.470676 kubelet[2105]: E0715 11:37:52.470678 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.482300 env[1314]: time="2025-07-15T11:37:52.482264541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kl4nd,Uid:a2e84812-a030-402b-b914-38421746ca62,Namespace:calico-system,Attempt:0,}" Jul 15 11:37:52.497932 env[1314]: time="2025-07-15T11:37:52.497811365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:37:52.497932 env[1314]: time="2025-07-15T11:37:52.497848465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:37:52.497932 env[1314]: time="2025-07-15T11:37:52.497858314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:37:52.498105 env[1314]: time="2025-07-15T11:37:52.497985786Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e75f63cbe298afb24493aea77b5385ce3525b6e6050e772c92f488d2ce078a91 pid=2611 runtime=io.containerd.runc.v2 Jul 15 11:37:52.546691 env[1314]: time="2025-07-15T11:37:52.546629307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kl4nd,Uid:a2e84812-a030-402b-b914-38421746ca62,Namespace:calico-system,Attempt:0,} returns sandbox id \"e75f63cbe298afb24493aea77b5385ce3525b6e6050e772c92f488d2ce078a91\"" Jul 15 11:37:52.570662 kubelet[2105]: E0715 11:37:52.570634 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.570662 kubelet[2105]: W0715 11:37:52.570652 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.570662 kubelet[2105]: E0715 11:37:52.570667 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.570955 kubelet[2105]: E0715 11:37:52.570875 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.570955 kubelet[2105]: W0715 11:37:52.570886 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.570955 kubelet[2105]: E0715 11:37:52.570901 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.571167 kubelet[2105]: E0715 11:37:52.571145 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.571203 kubelet[2105]: W0715 11:37:52.571168 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.571203 kubelet[2105]: E0715 11:37:52.571194 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.571411 kubelet[2105]: E0715 11:37:52.571389 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.571411 kubelet[2105]: W0715 11:37:52.571400 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.571462 kubelet[2105]: E0715 11:37:52.571412 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.571599 kubelet[2105]: E0715 11:37:52.571580 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.571599 kubelet[2105]: W0715 11:37:52.571590 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.571649 kubelet[2105]: E0715 11:37:52.571601 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.571777 kubelet[2105]: E0715 11:37:52.571763 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.571777 kubelet[2105]: W0715 11:37:52.571775 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.571824 kubelet[2105]: E0715 11:37:52.571788 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.571994 kubelet[2105]: E0715 11:37:52.571973 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.571994 kubelet[2105]: W0715 11:37:52.571986 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.572041 kubelet[2105]: E0715 11:37:52.571996 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.572162 kubelet[2105]: E0715 11:37:52.572150 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.572162 kubelet[2105]: W0715 11:37:52.572160 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.572234 kubelet[2105]: E0715 11:37:52.572221 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.572326 kubelet[2105]: E0715 11:37:52.572315 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.572326 kubelet[2105]: W0715 11:37:52.572324 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.572407 kubelet[2105]: E0715 11:37:52.572393 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.572493 kubelet[2105]: E0715 11:37:52.572481 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.572493 kubelet[2105]: W0715 11:37:52.572491 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.572549 kubelet[2105]: E0715 11:37:52.572513 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.572764 kubelet[2105]: E0715 11:37:52.572745 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.572764 kubelet[2105]: W0715 11:37:52.572758 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.572857 kubelet[2105]: E0715 11:37:52.572801 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.573015 kubelet[2105]: E0715 11:37:52.573000 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.573015 kubelet[2105]: W0715 11:37:52.573011 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.573093 kubelet[2105]: E0715 11:37:52.573051 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.573279 kubelet[2105]: E0715 11:37:52.573264 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.573279 kubelet[2105]: W0715 11:37:52.573275 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.573358 kubelet[2105]: E0715 11:37:52.573288 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.573595 kubelet[2105]: E0715 11:37:52.573561 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.573595 kubelet[2105]: W0715 11:37:52.573578 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.573664 kubelet[2105]: E0715 11:37:52.573603 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.573794 kubelet[2105]: E0715 11:37:52.573777 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.573838 kubelet[2105]: W0715 11:37:52.573813 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.573838 kubelet[2105]: E0715 11:37:52.573840 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.574053 kubelet[2105]: E0715 11:37:52.574035 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.574053 kubelet[2105]: W0715 11:37:52.574048 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.574125 kubelet[2105]: E0715 11:37:52.574070 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.574295 kubelet[2105]: E0715 11:37:52.574279 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.574295 kubelet[2105]: W0715 11:37:52.574293 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.574343 kubelet[2105]: E0715 11:37:52.574315 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.574542 kubelet[2105]: E0715 11:37:52.574494 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.574542 kubelet[2105]: W0715 11:37:52.574541 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.574613 kubelet[2105]: E0715 11:37:52.574558 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.574758 kubelet[2105]: E0715 11:37:52.574742 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.574758 kubelet[2105]: W0715 11:37:52.574754 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.574810 kubelet[2105]: E0715 11:37:52.574765 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.574937 kubelet[2105]: E0715 11:37:52.574922 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.574937 kubelet[2105]: W0715 11:37:52.574933 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.575001 kubelet[2105]: E0715 11:37:52.574942 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.575167 kubelet[2105]: E0715 11:37:52.575153 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.575167 kubelet[2105]: W0715 11:37:52.575163 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.575213 kubelet[2105]: E0715 11:37:52.575173 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.575374 kubelet[2105]: E0715 11:37:52.575356 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.575374 kubelet[2105]: W0715 11:37:52.575369 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.575439 kubelet[2105]: E0715 11:37:52.575380 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.575669 kubelet[2105]: E0715 11:37:52.575653 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.575669 kubelet[2105]: W0715 11:37:52.575665 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.575715 kubelet[2105]: E0715 11:37:52.575675 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.576571 kubelet[2105]: E0715 11:37:52.576553 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.576571 kubelet[2105]: W0715 11:37:52.576568 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.576638 kubelet[2105]: E0715 11:37:52.576584 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.576793 kubelet[2105]: E0715 11:37:52.576776 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.576839 kubelet[2105]: W0715 11:37:52.576815 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.576839 kubelet[2105]: E0715 11:37:52.576829 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:52.579398 kubelet[2105]: E0715 11:37:52.579374 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:52.579398 kubelet[2105]: W0715 11:37:52.579387 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:52.579398 kubelet[2105]: E0715 11:37:52.579395 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:52.987000 audit[2672]: NETFILTER_CFG table=filter:97 family=2 entries=22 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:52.987000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcc051c820 a2=0 a3=7ffcc051c80c items=0 ppid=2252 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:52.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:52.991000 audit[2672]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:37:52.991000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc051c820 a2=0 a3=0 items=0 ppid=2252 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:37:52.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:37:53.449456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797644111.mount: Deactivated successfully. Jul 15 11:37:53.563991 kubelet[2105]: E0715 11:37:53.563938 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgg9" podUID="9513186e-84fa-49d1-893d-fcd495764a33" Jul 15 11:37:54.802284 env[1314]: time="2025-07-15T11:37:54.802230769Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:54.804381 env[1314]: time="2025-07-15T11:37:54.804331748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:54.805963 env[1314]: time="2025-07-15T11:37:54.805930635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:54.807430 env[1314]: time="2025-07-15T11:37:54.807399634Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:54.807811 env[1314]: time="2025-07-15T11:37:54.807778784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 11:37:54.809631 env[1314]: time="2025-07-15T11:37:54.808703160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 11:37:54.818177 env[1314]: time="2025-07-15T11:37:54.818136913Z" level=info msg="CreateContainer within sandbox 
\"920895a8c546ab9420ab6315065275b15b11b6389f41e959bdf93fc17270392b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 11:37:54.830998 env[1314]: time="2025-07-15T11:37:54.830954051Z" level=info msg="CreateContainer within sandbox \"920895a8c546ab9420ab6315065275b15b11b6389f41e959bdf93fc17270392b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"feec2f93c319d16e1ecc31e569a9787f6c1a6c7214cc2f2eb57deae2e77400cd\"" Jul 15 11:37:54.832710 env[1314]: time="2025-07-15T11:37:54.832177695Z" level=info msg="StartContainer for \"feec2f93c319d16e1ecc31e569a9787f6c1a6c7214cc2f2eb57deae2e77400cd\"" Jul 15 11:37:54.882644 env[1314]: time="2025-07-15T11:37:54.882600342Z" level=info msg="StartContainer for \"feec2f93c319d16e1ecc31e569a9787f6c1a6c7214cc2f2eb57deae2e77400cd\" returns successfully" Jul 15 11:37:55.565766 kubelet[2105]: E0715 11:37:55.565717 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgg9" podUID="9513186e-84fa-49d1-893d-fcd495764a33" Jul 15 11:37:55.869478 kubelet[2105]: E0715 11:37:55.869379 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:55.880932 kubelet[2105]: E0715 11:37:55.880909 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.880932 kubelet[2105]: W0715 11:37:55.880926 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.880932 kubelet[2105]: E0715 11:37:55.880945 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.881130 kubelet[2105]: E0715 11:37:55.881117 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.881130 kubelet[2105]: W0715 11:37:55.881127 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.881186 kubelet[2105]: E0715 11:37:55.881135 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.881304 kubelet[2105]: E0715 11:37:55.881289 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.881304 kubelet[2105]: W0715 11:37:55.881299 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.881304 kubelet[2105]: E0715 11:37:55.881306 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:55.881465 kubelet[2105]: E0715 11:37:55.881442 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.881465 kubelet[2105]: W0715 11:37:55.881452 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.881465 kubelet[2105]: E0715 11:37:55.881460 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.881675 kubelet[2105]: E0715 11:37:55.881607 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.881675 kubelet[2105]: W0715 11:37:55.881614 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.881675 kubelet[2105]: E0715 11:37:55.881621 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.881760 kubelet[2105]: E0715 11:37:55.881747 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.881760 kubelet[2105]: W0715 11:37:55.881756 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.881812 kubelet[2105]: E0715 11:37:55.881763 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.881903 kubelet[2105]: E0715 11:37:55.881892 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.881903 kubelet[2105]: W0715 11:37:55.881900 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.881954 kubelet[2105]: E0715 11:37:55.881907 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.882051 kubelet[2105]: E0715 11:37:55.882040 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.882051 kubelet[2105]: W0715 11:37:55.882049 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.882102 kubelet[2105]: E0715 11:37:55.882057 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:55.882204 kubelet[2105]: E0715 11:37:55.882193 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.882204 kubelet[2105]: W0715 11:37:55.882201 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.882282 kubelet[2105]: E0715 11:37:55.882208 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.882373 kubelet[2105]: E0715 11:37:55.882352 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.882373 kubelet[2105]: W0715 11:37:55.882361 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.882373 kubelet[2105]: E0715 11:37:55.882369 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.882531 kubelet[2105]: E0715 11:37:55.882519 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.882531 kubelet[2105]: W0715 11:37:55.882528 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.882579 kubelet[2105]: E0715 11:37:55.882535 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.882676 kubelet[2105]: E0715 11:37:55.882664 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.882676 kubelet[2105]: W0715 11:37:55.882674 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.882726 kubelet[2105]: E0715 11:37:55.882681 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.882823 kubelet[2105]: E0715 11:37:55.882812 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.882823 kubelet[2105]: W0715 11:37:55.882820 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.882872 kubelet[2105]: E0715 11:37:55.882829 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:55.882974 kubelet[2105]: E0715 11:37:55.882962 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.882974 kubelet[2105]: W0715 11:37:55.882971 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.883023 kubelet[2105]: E0715 11:37:55.882978 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.883117 kubelet[2105]: E0715 11:37:55.883106 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.883117 kubelet[2105]: W0715 11:37:55.883114 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.883165 kubelet[2105]: E0715 11:37:55.883121 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.895587 kubelet[2105]: E0715 11:37:55.895551 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.895587 kubelet[2105]: W0715 11:37:55.895571 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.895587 kubelet[2105]: E0715 11:37:55.895588 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.895772 kubelet[2105]: E0715 11:37:55.895758 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.895772 kubelet[2105]: W0715 11:37:55.895768 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.895823 kubelet[2105]: E0715 11:37:55.895781 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.896000 kubelet[2105]: E0715 11:37:55.895985 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.896000 kubelet[2105]: W0715 11:37:55.895996 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.896071 kubelet[2105]: E0715 11:37:55.896009 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:55.896208 kubelet[2105]: E0715 11:37:55.896185 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.896208 kubelet[2105]: W0715 11:37:55.896196 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.896208 kubelet[2105]: E0715 11:37:55.896208 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.896432 kubelet[2105]: E0715 11:37:55.896407 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.896432 kubelet[2105]: W0715 11:37:55.896429 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.896539 kubelet[2105]: E0715 11:37:55.896454 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.896635 kubelet[2105]: E0715 11:37:55.896619 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.896635 kubelet[2105]: W0715 11:37:55.896629 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.896705 kubelet[2105]: E0715 11:37:55.896641 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.896796 kubelet[2105]: E0715 11:37:55.896781 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.896796 kubelet[2105]: W0715 11:37:55.896791 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.896846 kubelet[2105]: E0715 11:37:55.896805 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.896951 kubelet[2105]: E0715 11:37:55.896936 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.896951 kubelet[2105]: W0715 11:37:55.896945 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.897019 kubelet[2105]: E0715 11:37:55.896957 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:55.897132 kubelet[2105]: E0715 11:37:55.897118 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.897132 kubelet[2105]: W0715 11:37:55.897127 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.897204 kubelet[2105]: E0715 11:37:55.897138 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.897325 kubelet[2105]: E0715 11:37:55.897308 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.897325 kubelet[2105]: W0715 11:37:55.897320 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.897399 kubelet[2105]: E0715 11:37:55.897334 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.897505 kubelet[2105]: E0715 11:37:55.897482 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.897505 kubelet[2105]: W0715 11:37:55.897500 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.897559 kubelet[2105]: E0715 11:37:55.897513 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.897692 kubelet[2105]: E0715 11:37:55.897666 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.897692 kubelet[2105]: W0715 11:37:55.897682 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.897757 kubelet[2105]: E0715 11:37:55.897697 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.897853 kubelet[2105]: E0715 11:37:55.897839 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.897853 kubelet[2105]: W0715 11:37:55.897847 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.897926 kubelet[2105]: E0715 11:37:55.897859 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:55.898062 kubelet[2105]: E0715 11:37:55.898047 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.898090 kubelet[2105]: W0715 11:37:55.898069 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.898090 kubelet[2105]: E0715 11:37:55.898082 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.898334 kubelet[2105]: E0715 11:37:55.898309 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.898334 kubelet[2105]: W0715 11:37:55.898321 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.898334 kubelet[2105]: E0715 11:37:55.898334 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.898512 kubelet[2105]: E0715 11:37:55.898496 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.898512 kubelet[2105]: W0715 11:37:55.898509 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.898573 kubelet[2105]: E0715 11:37:55.898523 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.898694 kubelet[2105]: E0715 11:37:55.898679 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.898694 kubelet[2105]: W0715 11:37:55.898688 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.898694 kubelet[2105]: E0715 11:37:55.898696 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 11:37:55.898926 kubelet[2105]: E0715 11:37:55.898911 2105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 11:37:55.898926 kubelet[2105]: W0715 11:37:55.898921 2105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 11:37:55.898996 kubelet[2105]: E0715 11:37:55.898930 2105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 11:37:56.059329 env[1314]: time="2025-07-15T11:37:56.059278027Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:56.061451 env[1314]: time="2025-07-15T11:37:56.061415891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:56.063112 env[1314]: time="2025-07-15T11:37:56.063065939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:56.064633 env[1314]: time="2025-07-15T11:37:56.064590340Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:37:56.065070 env[1314]: time="2025-07-15T11:37:56.065033940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 11:37:56.067058 env[1314]: time="2025-07-15T11:37:56.067024505Z" level=info msg="CreateContainer within sandbox \"e75f63cbe298afb24493aea77b5385ce3525b6e6050e772c92f488d2ce078a91\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 11:37:56.079011 env[1314]: time="2025-07-15T11:37:56.078972448Z" level=info msg="CreateContainer within sandbox \"e75f63cbe298afb24493aea77b5385ce3525b6e6050e772c92f488d2ce078a91\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8e7975397dcb83e878dafbcde44348d7b8ebe879f550f8d1a46e514c566fad1d\"" Jul 15 11:37:56.079380 env[1314]: time="2025-07-15T11:37:56.079346096Z" level=info msg="StartContainer for \"8e7975397dcb83e878dafbcde44348d7b8ebe879f550f8d1a46e514c566fad1d\"" Jul 15 11:37:56.130891 env[1314]: time="2025-07-15T11:37:56.130792797Z" level=info msg="StartContainer for \"8e7975397dcb83e878dafbcde44348d7b8ebe879f550f8d1a46e514c566fad1d\" returns successfully" Jul 15 11:37:56.436936 env[1314]: time="2025-07-15T11:37:56.436812799Z" level=info msg="shim disconnected" id=8e7975397dcb83e878dafbcde44348d7b8ebe879f550f8d1a46e514c566fad1d Jul 15 11:37:56.436936 env[1314]: time="2025-07-15T11:37:56.436858065Z" level=warning msg="cleaning up after shim disconnected" id=8e7975397dcb83e878dafbcde44348d7b8ebe879f550f8d1a46e514c566fad1d namespace=k8s.io Jul 15 11:37:56.436936 env[1314]: time="2025-07-15T11:37:56.436867723Z" level=info msg="cleaning up dead shim" Jul 15 11:37:56.442827 env[1314]: time="2025-07-15T11:37:56.442790241Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:37:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2799 runtime=io.containerd.runc.v2\n" Jul 15 11:37:56.814677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e7975397dcb83e878dafbcde44348d7b8ebe879f550f8d1a46e514c566fad1d-rootfs.mount: Deactivated successfully. 
Jul 15 11:37:56.873182 kubelet[2105]: I0715 11:37:56.872827 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:37:56.873182 kubelet[2105]: E0715 11:37:56.873129 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:37:56.873994 env[1314]: time="2025-07-15T11:37:56.873961505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 11:37:56.885333 kubelet[2105]: I0715 11:37:56.885273 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6856b78b9c-vqz59" podStartSLOduration=3.318319195 podStartE2EDuration="5.885260158s" podCreationTimestamp="2025-07-15 11:37:51 +0000 UTC" firstStartedPulling="2025-07-15 11:37:52.241587685 +0000 UTC m=+16.768107230" lastFinishedPulling="2025-07-15 11:37:54.808528648 +0000 UTC m=+19.335048193" observedRunningTime="2025-07-15 11:37:55.877443089 +0000 UTC m=+20.403962624" watchObservedRunningTime="2025-07-15 11:37:56.885260158 +0000 UTC m=+21.411779723" Jul 15 11:37:57.563962 kubelet[2105]: E0715 11:37:57.563908 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgg9" podUID="9513186e-84fa-49d1-893d-fcd495764a33" Jul 15 11:37:59.565132 kubelet[2105]: E0715 11:37:59.565085 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgg9" podUID="9513186e-84fa-49d1-893d-fcd495764a33" Jul 15 11:38:00.312047 env[1314]: time="2025-07-15T11:38:00.311988948Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:00.313720 env[1314]: time="2025-07-15T11:38:00.313690606Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:00.315328 env[1314]: time="2025-07-15T11:38:00.315287816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:00.316658 env[1314]: time="2025-07-15T11:38:00.316624284Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:00.317092 env[1314]: time="2025-07-15T11:38:00.317059687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 11:38:00.318897 env[1314]: time="2025-07-15T11:38:00.318855283Z" level=info msg="CreateContainer within sandbox \"e75f63cbe298afb24493aea77b5385ce3525b6e6050e772c92f488d2ce078a91\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 11:38:00.330466 env[1314]: time="2025-07-15T11:38:00.330419172Z" level=info 
msg="CreateContainer within sandbox \"e75f63cbe298afb24493aea77b5385ce3525b6e6050e772c92f488d2ce078a91\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"96c55eb08160e219a4b69038bf0a85ed8b86cbf778db8d73f7cd1d3ded912338\"" Jul 15 11:38:00.330860 env[1314]: time="2025-07-15T11:38:00.330823215Z" level=info msg="StartContainer for \"96c55eb08160e219a4b69038bf0a85ed8b86cbf778db8d73f7cd1d3ded912338\"" Jul 15 11:38:00.373154 env[1314]: time="2025-07-15T11:38:00.373103112Z" level=info msg="StartContainer for \"96c55eb08160e219a4b69038bf0a85ed8b86cbf778db8d73f7cd1d3ded912338\" returns successfully" Jul 15 11:38:01.564375 kubelet[2105]: E0715 11:38:01.564315 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-swgg9" podUID="9513186e-84fa-49d1-893d-fcd495764a33" Jul 15 11:38:02.575033 env[1314]: time="2025-07-15T11:38:02.574961255Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:38:02.592793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96c55eb08160e219a4b69038bf0a85ed8b86cbf778db8d73f7cd1d3ded912338-rootfs.mount: Deactivated successfully. Jul 15 11:38:02.596126 env[1314]: time="2025-07-15T11:38:02.596063199Z" level=info msg="shim disconnected" id=96c55eb08160e219a4b69038bf0a85ed8b86cbf778db8d73f7cd1d3ded912338 Jul 15 11:38:02.596208 env[1314]: time="2025-07-15T11:38:02.596126729Z" level=warning msg="cleaning up after shim disconnected" id=96c55eb08160e219a4b69038bf0a85ed8b86cbf778db8d73f7cd1d3ded912338 namespace=k8s.io Jul 15 11:38:02.596208 env[1314]: time="2025-07-15T11:38:02.596140165Z" level=info msg="cleaning up dead shim" Jul 15 11:38:02.603651 env[1314]: time="2025-07-15T11:38:02.603613924Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:38:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2868 runtime=io.containerd.runc.v2\n" Jul 15 11:38:02.629984 kubelet[2105]: I0715 11:38:02.629948 2105 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 11:38:02.746004 kubelet[2105]: I0715 11:38:02.745968 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5457e7a6-68d9-4a56-8b35-b756347df804-config\") pod \"goldmane-58fd7646b9-d6792\" (UID: \"5457e7a6-68d9-4a56-8b35-b756347df804\") " pod="calico-system/goldmane-58fd7646b9-d6792" Jul 15 11:38:02.746004 kubelet[2105]: I0715 11:38:02.746004 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66f25f04-1422-4094-b0a1-c4395744d9e1-whisker-backend-key-pair\") pod \"whisker-848d5c4469-g74sb\" (UID: \"66f25f04-1422-4094-b0a1-c4395744d9e1\") " pod="calico-system/whisker-848d5c4469-g74sb" Jul 15 11:38:02.746004 kubelet[2105]: I0715 11:38:02.746020 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8jq\" (UniqueName: \"kubernetes.io/projected/3f1c906d-fd33-4115-b0b0-35d63313ac89-kube-api-access-2r8jq\") pod \"calico-apiserver-548f644bc4-kt62w\" (UID: 
\"3f1c906d-fd33-4115-b0b0-35d63313ac89\") " pod="calico-apiserver/calico-apiserver-548f644bc4-kt62w" Jul 15 11:38:02.746290 kubelet[2105]: I0715 11:38:02.746037 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm88h\" (UniqueName: \"kubernetes.io/projected/025841db-a9f5-430b-a1a5-f023b95f1b83-kube-api-access-lm88h\") pod \"calico-kube-controllers-54b4db4784-n8kns\" (UID: \"025841db-a9f5-430b-a1a5-f023b95f1b83\") " pod="calico-system/calico-kube-controllers-54b4db4784-n8kns" Jul 15 11:38:02.746290 kubelet[2105]: I0715 11:38:02.746057 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnsjc\" (UniqueName: \"kubernetes.io/projected/c60b401d-8be3-4942-8e09-43794a037070-kube-api-access-gnsjc\") pod \"calico-apiserver-548f644bc4-2cx2f\" (UID: \"c60b401d-8be3-4942-8e09-43794a037070\") " pod="calico-apiserver/calico-apiserver-548f644bc4-2cx2f" Jul 15 11:38:02.746290 kubelet[2105]: I0715 11:38:02.746090 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/66f25f04-1422-4094-b0a1-c4395744d9e1-kube-api-access-g5tcj\") pod \"whisker-848d5c4469-g74sb\" (UID: \"66f25f04-1422-4094-b0a1-c4395744d9e1\") " pod="calico-system/whisker-848d5c4469-g74sb" Jul 15 11:38:02.746290 kubelet[2105]: I0715 11:38:02.746108 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7f801d7-4928-4dc4-8fb8-d3b03f14ceff-config-volume\") pod \"coredns-7c65d6cfc9-mgvww\" (UID: \"a7f801d7-4928-4dc4-8fb8-d3b03f14ceff\") " pod="kube-system/coredns-7c65d6cfc9-mgvww" Jul 15 11:38:02.746290 kubelet[2105]: I0715 11:38:02.746124 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3f1c906d-fd33-4115-b0b0-35d63313ac89-calico-apiserver-certs\") pod \"calico-apiserver-548f644bc4-kt62w\" (UID: \"3f1c906d-fd33-4115-b0b0-35d63313ac89\") " pod="calico-apiserver/calico-apiserver-548f644bc4-kt62w" Jul 15 11:38:02.746499 kubelet[2105]: I0715 11:38:02.746142 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9489b1af-289a-4806-935c-bff657fb9645-config-volume\") pod \"coredns-7c65d6cfc9-sbpvz\" (UID: \"9489b1af-289a-4806-935c-bff657fb9645\") " pod="kube-system/coredns-7c65d6cfc9-sbpvz" Jul 15 11:38:02.746499 kubelet[2105]: I0715 11:38:02.746169 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66f25f04-1422-4094-b0a1-c4395744d9e1-whisker-ca-bundle\") pod \"whisker-848d5c4469-g74sb\" (UID: \"66f25f04-1422-4094-b0a1-c4395744d9e1\") " pod="calico-system/whisker-848d5c4469-g74sb" Jul 15 11:38:02.746499 kubelet[2105]: I0715 11:38:02.746282 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5457e7a6-68d9-4a56-8b35-b756347df804-goldmane-key-pair\") pod \"goldmane-58fd7646b9-d6792\" (UID: \"5457e7a6-68d9-4a56-8b35-b756347df804\") " pod="calico-system/goldmane-58fd7646b9-d6792" Jul 15 11:38:02.746499 kubelet[2105]: I0715 11:38:02.746350 2105 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c60b401d-8be3-4942-8e09-43794a037070-calico-apiserver-certs\") pod \"calico-apiserver-548f644bc4-2cx2f\" (UID: \"c60b401d-8be3-4942-8e09-43794a037070\") " pod="calico-apiserver/calico-apiserver-548f644bc4-2cx2f" Jul 15 11:38:02.746499 kubelet[2105]: I0715 11:38:02.746398 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/025841db-a9f5-430b-a1a5-f023b95f1b83-tigera-ca-bundle\") pod \"calico-kube-controllers-54b4db4784-n8kns\" (UID: \"025841db-a9f5-430b-a1a5-f023b95f1b83\") " pod="calico-system/calico-kube-controllers-54b4db4784-n8kns" Jul 15 11:38:02.746647 kubelet[2105]: I0715 11:38:02.746451 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhr2v\" (UniqueName: \"kubernetes.io/projected/5457e7a6-68d9-4a56-8b35-b756347df804-kube-api-access-bhr2v\") pod \"goldmane-58fd7646b9-d6792\" (UID: \"5457e7a6-68d9-4a56-8b35-b756347df804\") " pod="calico-system/goldmane-58fd7646b9-d6792" Jul 15 11:38:02.746647 kubelet[2105]: I0715 11:38:02.746476 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh662\" (UniqueName: \"kubernetes.io/projected/9489b1af-289a-4806-935c-bff657fb9645-kube-api-access-nh662\") pod \"coredns-7c65d6cfc9-sbpvz\" (UID: \"9489b1af-289a-4806-935c-bff657fb9645\") " pod="kube-system/coredns-7c65d6cfc9-sbpvz" Jul 15 11:38:02.746647 kubelet[2105]: I0715 11:38:02.746498 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r98mg\" (UniqueName: \"kubernetes.io/projected/a7f801d7-4928-4dc4-8fb8-d3b03f14ceff-kube-api-access-r98mg\") pod \"coredns-7c65d6cfc9-mgvww\" (UID: \"a7f801d7-4928-4dc4-8fb8-d3b03f14ceff\") " pod="kube-system/coredns-7c65d6cfc9-mgvww" Jul 15 11:38:02.746647 kubelet[2105]: I0715 11:38:02.746520 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5457e7a6-68d9-4a56-8b35-b756347df804-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-d6792\" (UID: \"5457e7a6-68d9-4a56-8b35-b756347df804\") " pod="calico-system/goldmane-58fd7646b9-d6792" Jul 15 11:38:02.892658 env[1314]: time="2025-07-15T11:38:02.892520602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 11:38:02.952514 kubelet[2105]: E0715 11:38:02.952469 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:02.953210 env[1314]: time="2025-07-15T11:38:02.953135200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mgvww,Uid:a7f801d7-4928-4dc4-8fb8-d3b03f14ceff,Namespace:kube-system,Attempt:0,}" Jul 15 11:38:02.956461 env[1314]: time="2025-07-15T11:38:02.956425685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848d5c4469-g74sb,Uid:66f25f04-1422-4094-b0a1-c4395744d9e1,Namespace:calico-system,Attempt:0,}" Jul 15 11:38:02.961948 env[1314]: time="2025-07-15T11:38:02.961911700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d6792,Uid:5457e7a6-68d9-4a56-8b35-b756347df804,Namespace:calico-system,Attempt:0,}" Jul 15 11:38:02.967324 kubelet[2105]: 
E0715 11:38:02.967285 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:02.967495 env[1314]: time="2025-07-15T11:38:02.967468277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b4db4784-n8kns,Uid:025841db-a9f5-430b-a1a5-f023b95f1b83,Namespace:calico-system,Attempt:0,}" Jul 15 11:38:02.968173 env[1314]: time="2025-07-15T11:38:02.967741133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbpvz,Uid:9489b1af-289a-4806-935c-bff657fb9645,Namespace:kube-system,Attempt:0,}" Jul 15 11:38:02.968389 env[1314]: time="2025-07-15T11:38:02.968340554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548f644bc4-2cx2f,Uid:c60b401d-8be3-4942-8e09-43794a037070,Namespace:calico-apiserver,Attempt:0,}" Jul 15 11:38:02.970190 env[1314]: time="2025-07-15T11:38:02.970161435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548f644bc4-kt62w,Uid:3f1c906d-fd33-4115-b0b0-35d63313ac89,Namespace:calico-apiserver,Attempt:0,}" Jul 15 11:38:03.116731 env[1314]: time="2025-07-15T11:38:03.116667257Z" level=error msg="Failed to destroy network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.117138 env[1314]: time="2025-07-15T11:38:03.117099944Z" level=error msg="encountered an error cleaning up failed sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.117195 env[1314]: time="2025-07-15T11:38:03.117144918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548f644bc4-2cx2f,Uid:c60b401d-8be3-4942-8e09-43794a037070,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.117419 kubelet[2105]: E0715 11:38:03.117369 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.117511 kubelet[2105]: E0715 11:38:03.117447 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548f644bc4-2cx2f" Jul 15 11:38:03.117511 kubelet[2105]: E0715 11:38:03.117470 2105 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548f644bc4-2cx2f" Jul 15 11:38:03.117565 kubelet[2105]: E0715 11:38:03.117508 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548f644bc4-2cx2f_calico-apiserver(c60b401d-8be3-4942-8e09-43794a037070)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548f644bc4-2cx2f_calico-apiserver(c60b401d-8be3-4942-8e09-43794a037070)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548f644bc4-2cx2f" podUID="c60b401d-8be3-4942-8e09-43794a037070" Jul 15 11:38:03.122601 env[1314]: time="2025-07-15T11:38:03.122549554Z" level=error msg="Failed to destroy network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.122909 env[1314]: time="2025-07-15T11:38:03.122885628Z" level=error msg="encountered an error cleaning up failed sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.122971 env[1314]: time="2025-07-15T11:38:03.122931084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b4db4784-n8kns,Uid:025841db-a9f5-430b-a1a5-f023b95f1b83,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.123095 kubelet[2105]: E0715 11:38:03.123069 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.123169 kubelet[2105]: E0715 11:38:03.123111 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-54b4db4784-n8kns" Jul 15 11:38:03.123169 kubelet[2105]: E0715 11:38:03.123127 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b4db4784-n8kns" Jul 15 11:38:03.123219 kubelet[2105]: E0715 11:38:03.123158 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54b4db4784-n8kns_calico-system(025841db-a9f5-430b-a1a5-f023b95f1b83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54b4db4784-n8kns_calico-system(025841db-a9f5-430b-a1a5-f023b95f1b83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b4db4784-n8kns" podUID="025841db-a9f5-430b-a1a5-f023b95f1b83" Jul 15 11:38:03.129603 env[1314]: time="2025-07-15T11:38:03.129558258Z" level=error msg="Failed to destroy network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.129879 env[1314]: time="2025-07-15T11:38:03.129851281Z" level=error msg="encountered an error cleaning up failed sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.129928 env[1314]: time="2025-07-15T11:38:03.129893210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848d5c4469-g74sb,Uid:66f25f04-1422-4094-b0a1-c4395744d9e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.130090 kubelet[2105]: E0715 11:38:03.130060 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.130140 kubelet[2105]: E0715 11:38:03.130093 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-848d5c4469-g74sb" Jul 15 11:38:03.130140 kubelet[2105]: E0715 11:38:03.130108 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-848d5c4469-g74sb" Jul 15 11:38:03.130196 kubelet[2105]: E0715 11:38:03.130148 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-848d5c4469-g74sb_calico-system(66f25f04-1422-4094-b0a1-c4395744d9e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-848d5c4469-g74sb_calico-system(66f25f04-1422-4094-b0a1-c4395744d9e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-848d5c4469-g74sb" podUID="66f25f04-1422-4094-b0a1-c4395744d9e1" Jul 15 11:38:03.138547 env[1314]: time="2025-07-15T11:38:03.138478250Z" level=error msg="Failed to destroy network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.139047 env[1314]: time="2025-07-15T11:38:03.139020845Z" level=error msg="encountered an error cleaning up failed sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.139156 env[1314]: time="2025-07-15T11:38:03.139128659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mgvww,Uid:a7f801d7-4928-4dc4-8fb8-d3b03f14ceff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.139475 kubelet[2105]: E0715 11:38:03.139429 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.139541 kubelet[2105]: E0715 11:38:03.139507 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mgvww" Jul 15 11:38:03.139541 kubelet[2105]: E0715 11:38:03.139525 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mgvww" Jul 15 11:38:03.139596 kubelet[2105]: E0715 11:38:03.139575 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mgvww_kube-system(a7f801d7-4928-4dc4-8fb8-d3b03f14ceff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mgvww_kube-system(a7f801d7-4928-4dc4-8fb8-d3b03f14ceff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mgvww" podUID="a7f801d7-4928-4dc4-8fb8-d3b03f14ceff" Jul 15 11:38:03.150456 env[1314]: time="2025-07-15T11:38:03.149228329Z" level=error msg="Failed to destroy network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.150897 env[1314]: time="2025-07-15T11:38:03.150873426Z" level=error msg="encountered an error cleaning up failed sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.151012 env[1314]: time="2025-07-15T11:38:03.150985136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbpvz,Uid:9489b1af-289a-4806-935c-bff657fb9645,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.151321 kubelet[2105]: E0715 11:38:03.151283 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.151387 kubelet[2105]: E0715 11:38:03.151348 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sbpvz" Jul 15 11:38:03.151387 kubelet[2105]: E0715 11:38:03.151366 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sbpvz" Jul 15 11:38:03.151461 kubelet[2105]: E0715 11:38:03.151420 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sbpvz_kube-system(9489b1af-289a-4806-935c-bff657fb9645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sbpvz_kube-system(9489b1af-289a-4806-935c-bff657fb9645)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sbpvz" podUID="9489b1af-289a-4806-935c-bff657fb9645" Jul 15 11:38:03.154229 env[1314]: time="2025-07-15T11:38:03.154163718Z" level=error msg="Failed to destroy network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.154732 env[1314]: time="2025-07-15T11:38:03.154680223Z" level=error msg="encountered an error cleaning up failed sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.154732 env[1314]: time="2025-07-15T11:38:03.154728584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548f644bc4-kt62w,Uid:3f1c906d-fd33-4115-b0b0-35d63313ac89,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.154946 kubelet[2105]: E0715 11:38:03.154863 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.154946 kubelet[2105]: E0715 11:38:03.154894 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548f644bc4-kt62w" Jul 15 11:38:03.154946 kubelet[2105]: E0715 11:38:03.154909 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548f644bc4-kt62w" Jul 15 11:38:03.155036 kubelet[2105]: E0715 11:38:03.154936 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548f644bc4-kt62w_calico-apiserver(3f1c906d-fd33-4115-b0b0-35d63313ac89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548f644bc4-kt62w_calico-apiserver(3f1c906d-fd33-4115-b0b0-35d63313ac89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548f644bc4-kt62w" podUID="3f1c906d-fd33-4115-b0b0-35d63313ac89" Jul 15 11:38:03.157902 env[1314]: time="2025-07-15T11:38:03.157843967Z" level=error msg="Failed to destroy network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.158133 env[1314]: time="2025-07-15T11:38:03.158096795Z" level=error msg="encountered an error cleaning up failed sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.158172 env[1314]: time="2025-07-15T11:38:03.158142009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d6792,Uid:5457e7a6-68d9-4a56-8b35-b756347df804,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.158287 kubelet[2105]: E0715 11:38:03.158263 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.158399 kubelet[2105]: E0715 11:38:03.158293 2105 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-d6792" Jul 15 11:38:03.158399 kubelet[2105]: E0715 11:38:03.158306 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-d6792" Jul 15 11:38:03.158399 kubelet[2105]: E0715 11:38:03.158334 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-d6792_calico-system(5457e7a6-68d9-4a56-8b35-b756347df804)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-d6792_calico-system(5457e7a6-68d9-4a56-8b35-b756347df804)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-d6792" podUID="5457e7a6-68d9-4a56-8b35-b756347df804" Jul 15 11:38:03.567074 env[1314]: time="2025-07-15T11:38:03.567031920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgg9,Uid:9513186e-84fa-49d1-893d-fcd495764a33,Namespace:calico-system,Attempt:0,}" Jul 15 11:38:03.615711 env[1314]: time="2025-07-15T11:38:03.615654931Z" level=error msg="Failed to destroy network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.616595 env[1314]: time="2025-07-15T11:38:03.616559940Z" level=error msg="encountered an error cleaning up failed sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.616655 env[1314]: time="2025-07-15T11:38:03.616620504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgg9,Uid:9513186e-84fa-49d1-893d-fcd495764a33,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.616921 kubelet[2105]: E0715 11:38:03.616878 2105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.616980 kubelet[2105]: E0715 11:38:03.616954 2105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-swgg9" Jul 15 11:38:03.617010 kubelet[2105]: E0715 11:38:03.616980 2105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-swgg9" Jul 15 11:38:03.617066 kubelet[2105]: E0715 11:38:03.617035 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-swgg9_calico-system(9513186e-84fa-49d1-893d-fcd495764a33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-swgg9_calico-system(9513186e-84fa-49d1-893d-fcd495764a33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-swgg9" podUID="9513186e-84fa-49d1-893d-fcd495764a33" Jul 15 11:38:03.617982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f-shm.mount: Deactivated successfully. 
Jul 15 11:38:03.892976 kubelet[2105]: I0715 11:38:03.892890 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:03.893539 env[1314]: time="2025-07-15T11:38:03.893503688Z" level=info msg="StopPodSandbox for \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\"" Jul 15 11:38:03.894157 kubelet[2105]: I0715 11:38:03.893934 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:03.894257 env[1314]: time="2025-07-15T11:38:03.894217837Z" level=info msg="StopPodSandbox for \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\"" Jul 15 11:38:03.895216 kubelet[2105]: I0715 11:38:03.895195 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:03.896911 env[1314]: time="2025-07-15T11:38:03.896591238Z" level=info msg="StopPodSandbox for \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\"" Jul 15 11:38:03.897414 kubelet[2105]: I0715 11:38:03.897379 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:03.897757 env[1314]: time="2025-07-15T11:38:03.897737002Z" level=info msg="StopPodSandbox for \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\"" Jul 15 11:38:03.899361 kubelet[2105]: I0715 11:38:03.899073 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:03.899484 env[1314]: time="2025-07-15T11:38:03.899450818Z" level=info msg="StopPodSandbox for \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\"" Jul 15 11:38:03.901963 kubelet[2105]: I0715 11:38:03.901919 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:03.903376 env[1314]: time="2025-07-15T11:38:03.903352264Z" level=info msg="StopPodSandbox for \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\"" Jul 15 11:38:03.903872 kubelet[2105]: I0715 11:38:03.903842 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:03.904299 env[1314]: time="2025-07-15T11:38:03.904229341Z" level=info msg="StopPodSandbox for \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\"" Jul 15 11:38:03.906768 kubelet[2105]: I0715 11:38:03.906737 2105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:03.907778 env[1314]: time="2025-07-15T11:38:03.907753976Z" level=info msg="StopPodSandbox for \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\"" Jul 15 11:38:03.926744 env[1314]: time="2025-07-15T11:38:03.926670634Z" level=error msg="StopPodSandbox for \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\" failed" error="failed to destroy network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.927046 kubelet[2105]: E0715 11:38:03.926953 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:03.927161 kubelet[2105]: E0715 11:38:03.927033 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5"} Jul 15 11:38:03.927161 kubelet[2105]: E0715 11:38:03.927112 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"025841db-a9f5-430b-a1a5-f023b95f1b83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:38:03.927161 kubelet[2105]: E0715 11:38:03.927144 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"025841db-a9f5-430b-a1a5-f023b95f1b83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b4db4784-n8kns" podUID="025841db-a9f5-430b-a1a5-f023b95f1b83" Jul 15 11:38:03.942270 env[1314]: time="2025-07-15T11:38:03.942201820Z" level=error msg="StopPodSandbox for \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\" failed" error="failed to destroy network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.942629 kubelet[2105]: E0715 11:38:03.942592 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:03.942700 kubelet[2105]: E0715 11:38:03.942640 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45"} Jul 15 11:38:03.942700 kubelet[2105]: E0715 11:38:03.942670 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66f25f04-1422-4094-b0a1-c4395744d9e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:38:03.942857 kubelet[2105]: E0715 11:38:03.942689 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66f25f04-1422-4094-b0a1-c4395744d9e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-848d5c4469-g74sb" podUID="66f25f04-1422-4094-b0a1-c4395744d9e1" Jul 15 11:38:03.943790 env[1314]: time="2025-07-15T11:38:03.943748010Z" level=error msg="StopPodSandbox for \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\" failed" error="failed to destroy network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.943933 kubelet[2105]: E0715 11:38:03.943900 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:03.943998 kubelet[2105]: E0715 11:38:03.943935 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b"} Jul 15 11:38:03.943998 kubelet[2105]: E0715 11:38:03.943968 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5457e7a6-68d9-4a56-8b35-b756347df804\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:38:03.944096 kubelet[2105]: E0715 11:38:03.943993 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5457e7a6-68d9-4a56-8b35-b756347df804\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-d6792" podUID="5457e7a6-68d9-4a56-8b35-b756347df804" Jul 15 11:38:03.954181 env[1314]: time="2025-07-15T11:38:03.954100077Z" level=error msg="StopPodSandbox for \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\" 
failed" error="failed to destroy network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.956910 kubelet[2105]: E0715 11:38:03.956866 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:03.956998 kubelet[2105]: E0715 11:38:03.956915 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94"} Jul 15 11:38:03.956998 kubelet[2105]: E0715 11:38:03.956942 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f1c906d-fd33-4115-b0b0-35d63313ac89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:38:03.957093 kubelet[2105]: E0715 11:38:03.956960 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f1c906d-fd33-4115-b0b0-35d63313ac89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548f644bc4-kt62w" podUID="3f1c906d-fd33-4115-b0b0-35d63313ac89" Jul 15 11:38:03.960895 env[1314]: time="2025-07-15T11:38:03.960836206Z" level=error msg="StopPodSandbox for \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\" failed" error="failed to destroy network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.961195 kubelet[2105]: E0715 11:38:03.961166 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:03.961284 kubelet[2105]: E0715 11:38:03.961197 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494"} Jul 15 11:38:03.961284 kubelet[2105]: E0715 
11:38:03.961219 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9489b1af-289a-4806-935c-bff657fb9645\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:38:03.961284 kubelet[2105]: E0715 11:38:03.961236 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9489b1af-289a-4806-935c-bff657fb9645\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sbpvz" podUID="9489b1af-289a-4806-935c-bff657fb9645" Jul 15 11:38:03.963785 env[1314]: time="2025-07-15T11:38:03.963753635Z" level=error msg="StopPodSandbox for \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\" failed" error="failed to destroy network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.963998 kubelet[2105]: E0715 11:38:03.963955 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:03.963998 kubelet[2105]: E0715 11:38:03.963989 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78"} Jul 15 11:38:03.964096 kubelet[2105]: E0715 11:38:03.964008 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7f801d7-4928-4dc4-8fb8-d3b03f14ceff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:38:03.964096 kubelet[2105]: E0715 11:38:03.964025 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7f801d7-4928-4dc4-8fb8-d3b03f14ceff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mgvww" 
podUID="a7f801d7-4928-4dc4-8fb8-d3b03f14ceff" Jul 15 11:38:03.969565 env[1314]: time="2025-07-15T11:38:03.969515424Z" level=error msg="StopPodSandbox for \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\" failed" error="failed to destroy network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.969993 kubelet[2105]: E0715 11:38:03.969866 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:03.969993 kubelet[2105]: E0715 11:38:03.969914 2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f"} Jul 15 11:38:03.969993 kubelet[2105]: E0715 11:38:03.969942 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9513186e-84fa-49d1-893d-fcd495764a33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:38:03.969993 kubelet[2105]: E0715 11:38:03.969962 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9513186e-84fa-49d1-893d-fcd495764a33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-swgg9" podUID="9513186e-84fa-49d1-893d-fcd495764a33" Jul 15 11:38:03.974050 env[1314]: time="2025-07-15T11:38:03.974004440Z" level=error msg="StopPodSandbox for \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\" failed" error="failed to destroy network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 11:38:03.974252 kubelet[2105]: E0715 11:38:03.974223 2105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:03.974304 kubelet[2105]: E0715 11:38:03.974258 
2105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd"} Jul 15 11:38:03.974304 kubelet[2105]: E0715 11:38:03.974277 2105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c60b401d-8be3-4942-8e09-43794a037070\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 15 11:38:03.974304 kubelet[2105]: E0715 11:38:03.974292 2105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c60b401d-8be3-4942-8e09-43794a037070\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548f644bc4-2cx2f" podUID="c60b401d-8be3-4942-8e09-43794a037070" Jul 15 11:38:09.158299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845932723.mount: Deactivated successfully. Jul 15 11:38:10.313091 env[1314]: time="2025-07-15T11:38:10.313031455Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:10.315262 env[1314]: time="2025-07-15T11:38:10.315223323Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:10.317473 env[1314]: time="2025-07-15T11:38:10.317455557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:10.319735 env[1314]: time="2025-07-15T11:38:10.319712207Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:10.320415 env[1314]: time="2025-07-15T11:38:10.320388060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 11:38:10.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.133:22-10.0.0.1:45564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:10.323761 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:45564.service. Jul 15 11:38:10.330844 kernel: kauditd_printk_skb: 25 callbacks suppressed Jul 15 11:38:10.330907 kernel: audit: type=1130 audit(1752579490.322:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.133:22-10.0.0.1:45564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:38:10.333206 env[1314]: time="2025-07-15T11:38:10.331852180Z" level=info msg="CreateContainer within sandbox \"e75f63cbe298afb24493aea77b5385ce3525b6e6050e772c92f488d2ce078a91\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 11:38:10.357677 env[1314]: time="2025-07-15T11:38:10.357630117Z" level=info msg="CreateContainer within sandbox \"e75f63cbe298afb24493aea77b5385ce3525b6e6050e772c92f488d2ce078a91\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e1bb48846b13258b703a30c865fdab3841038e8a0ac55904fc31ee8040839df7\"" Jul 15 11:38:10.358280 env[1314]: time="2025-07-15T11:38:10.358252479Z" level=info msg="StartContainer for \"e1bb48846b13258b703a30c865fdab3841038e8a0ac55904fc31ee8040839df7\"" Jul 15 11:38:10.367000 audit[3314]: USER_ACCT pid=3314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.368530 sshd[3314]: Accepted publickey for core from 10.0.0.1 port 45564 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:10.371000 audit[3314]: CRED_ACQ pid=3314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.373054 sshd[3314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:10.376323 kernel: audit: type=1101 audit(1752579490.367:281): pid=3314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.376374 kernel: audit: type=1103 audit(1752579490.371:282): pid=3314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.376392 kernel: audit: type=1006 audit(1752579490.371:283): pid=3314 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 15 11:38:10.378462 kernel: audit: type=1300 audit(1752579490.371:283): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd53273b60 a2=3 a3=0 items=0 ppid=1 pid=3314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:10.371000 audit[3314]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd53273b60 a2=3 a3=0 items=0 ppid=1 pid=3314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:10.378596 systemd[1]: Started session-8.scope. Jul 15 11:38:10.379357 systemd-logind[1296]: New session 8 of user core. 
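The repeated KillPodSandbox failures earlier in this log (pods whisker-848d5c4469-g74sb, goldmane-58fd7646b9-d6792, calico-apiserver-548f644bc4-kt62w/-2cx2f, coredns-7c65d6cfc9-sbpvz/-mgvww, csi-node-driver-swgg9) all trace back to one missing file: the Calico CNI plugin stats /var/lib/calico/nodename during teardown, and the error text itself names the remediation ("check that the calico/node container is running and has mounted /var/lib/calico/"). The calico-node container that writes this file is only started in the records above, so the errors are expected to stop from this point on. A minimal diagnostic sketch, assuming host access on the node; the only path used is the one quoted in the error:

    // nodenamecheck.go -- minimal sketch: reproduce the stat the Calico CNI plugin
    // performs, per the error text in the log above. Assumes it runs on the node
    // with /var/lib/calico visible from the host.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/calico/nodename" // path named in the CNI error
        info, err := os.Stat(path)
        if err != nil {
            if os.IsNotExist(err) {
                fmt.Println("nodename file missing: calico/node has likely not started yet")
                os.Exit(1)
            }
            fmt.Println("stat failed:", err)
            os.Exit(1)
        }
        name, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("read failed:", err)
            os.Exit(1)
        }
        fmt.Printf("%s (%d bytes): %s\n", path, info.Size(), string(name))
    }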
Jul 15 11:38:10.371000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:10.382000 audit[3314]: USER_START pid=3314 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.391277 kernel: audit: type=1327 audit(1752579490.371:283): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:10.391399 kernel: audit: type=1105 audit(1752579490.382:284): pid=3314 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.391420 kernel: audit: type=1103 audit(1752579490.383:285): pid=3336 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.383000 audit[3336]: CRED_ACQ pid=3336 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.412897 env[1314]: time="2025-07-15T11:38:10.412853468Z" level=info msg="StartContainer for \"e1bb48846b13258b703a30c865fdab3841038e8a0ac55904fc31ee8040839df7\" returns successfully" Jul 15 11:38:10.495620 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 11:38:10.495727 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 11:38:10.529083 sshd[3314]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:10.528000 audit[3314]: USER_END pid=3314 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.531239 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:45564.service: Deactivated successfully. Jul 15 11:38:10.532536 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 11:38:10.535344 systemd-logind[1296]: Session 8 logged out. Waiting for processes to exit. Jul 15 11:38:10.539437 kernel: audit: type=1106 audit(1752579490.528:286): pid=3314 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.539485 kernel: audit: type=1104 audit(1752579490.528:287): pid=3314 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.528000 audit[3314]: CRED_DISP pid=3314 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:10.538518 systemd-logind[1296]: Removed session 8. 
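The audit PROCTITLE field in the records above and below is the process's command line (the raw /proc/<pid>/cmdline contents, NUL-separated) encoded as hex. The value logged for the SSH session here, 737368643A20636F7265205B707269765D, decodes to "sshd: core [priv]", and the later tee, iptables-restore, and bpftool records in this log use the same encoding. A small decoder sketch:

    // proctitle_decode.go -- sketch: decode an audit PROCTITLE hex string into argv.
    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func decodeProctitle(h string) ([]string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return nil, err
        }
        // Arguments are separated (and often terminated) by NUL bytes.
        return strings.FieldsFunc(string(raw), func(r rune) bool { return r == 0 }), nil
    }

    func main() {
        // Value taken from the sshd audit record above.
        args, err := decodeProctitle("737368643A20636F7265205B707269765D")
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.Join(args, " ")) // prints: sshd: core [priv]
    }

Applied to the later records, the same decoding gives "iptables-restore -w 5 -W 100000 --noflush --counters" for the NETFILTER_CFG entries and "bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0" for the calico-node bpftool entries.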
Jul 15 11:38:10.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.133:22-10.0.0.1:45564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:10.592075 env[1314]: time="2025-07-15T11:38:10.591955187Z" level=info msg="StopPodSandbox for \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\"" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.650 [INFO][3397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.651 [INFO][3397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" iface="eth0" netns="/var/run/netns/cni-6c0caae9-5a0b-178b-d5b9-2875ed0c827b" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.651 [INFO][3397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" iface="eth0" netns="/var/run/netns/cni-6c0caae9-5a0b-178b-d5b9-2875ed0c827b" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.652 [INFO][3397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" iface="eth0" netns="/var/run/netns/cni-6c0caae9-5a0b-178b-d5b9-2875ed0c827b" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.652 [INFO][3397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.652 [INFO][3397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.698 [INFO][3406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.698 [INFO][3406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.698 [INFO][3406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.706 [WARNING][3406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.707 [INFO][3406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.708 [INFO][3406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:10.711819 env[1314]: 2025-07-15 11:38:10.710 [INFO][3397] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:10.712305 env[1314]: time="2025-07-15T11:38:10.711964157Z" level=info msg="TearDown network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\" successfully" Jul 15 11:38:10.712305 env[1314]: time="2025-07-15T11:38:10.711993262Z" level=info msg="StopPodSandbox for \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\" returns successfully" Jul 15 11:38:10.789148 kubelet[2105]: I0715 11:38:10.789090 2105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/66f25f04-1422-4094-b0a1-c4395744d9e1-kube-api-access-g5tcj\") pod \"66f25f04-1422-4094-b0a1-c4395744d9e1\" (UID: \"66f25f04-1422-4094-b0a1-c4395744d9e1\") " Jul 15 11:38:10.789148 kubelet[2105]: I0715 11:38:10.789133 2105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66f25f04-1422-4094-b0a1-c4395744d9e1-whisker-ca-bundle\") pod \"66f25f04-1422-4094-b0a1-c4395744d9e1\" (UID: \"66f25f04-1422-4094-b0a1-c4395744d9e1\") " Jul 15 11:38:10.789581 kubelet[2105]: I0715 11:38:10.789174 2105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66f25f04-1422-4094-b0a1-c4395744d9e1-whisker-backend-key-pair\") pod \"66f25f04-1422-4094-b0a1-c4395744d9e1\" (UID: \"66f25f04-1422-4094-b0a1-c4395744d9e1\") " Jul 15 11:38:10.789581 kubelet[2105]: I0715 11:38:10.789544 2105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f25f04-1422-4094-b0a1-c4395744d9e1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "66f25f04-1422-4094-b0a1-c4395744d9e1" (UID: "66f25f04-1422-4094-b0a1-c4395744d9e1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 11:38:10.791755 kubelet[2105]: I0715 11:38:10.791711 2105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f25f04-1422-4094-b0a1-c4395744d9e1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "66f25f04-1422-4094-b0a1-c4395744d9e1" (UID: "66f25f04-1422-4094-b0a1-c4395744d9e1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 11:38:10.792409 kubelet[2105]: I0715 11:38:10.792375 2105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f25f04-1422-4094-b0a1-c4395744d9e1-kube-api-access-g5tcj" (OuterVolumeSpecName: "kube-api-access-g5tcj") pod "66f25f04-1422-4094-b0a1-c4395744d9e1" (UID: "66f25f04-1422-4094-b0a1-c4395744d9e1"). InnerVolumeSpecName "kube-api-access-g5tcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:38:10.889964 kubelet[2105]: I0715 11:38:10.889844 2105 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/66f25f04-1422-4094-b0a1-c4395744d9e1-kube-api-access-g5tcj\") on node \"localhost\" DevicePath \"\"" Jul 15 11:38:10.889964 kubelet[2105]: I0715 11:38:10.889874 2105 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66f25f04-1422-4094-b0a1-c4395744d9e1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 11:38:10.889964 kubelet[2105]: I0715 11:38:10.889881 2105 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66f25f04-1422-4094-b0a1-c4395744d9e1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 11:38:10.949955 kubelet[2105]: I0715 11:38:10.949896 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kl4nd" podStartSLOduration=1.175798631 podStartE2EDuration="18.949877515s" podCreationTimestamp="2025-07-15 11:37:52 +0000 UTC" firstStartedPulling="2025-07-15 11:37:52.547727828 +0000 UTC m=+17.074247373" lastFinishedPulling="2025-07-15 11:38:10.321806722 +0000 UTC m=+34.848326257" observedRunningTime="2025-07-15 11:38:10.949201371 +0000 UTC m=+35.475720916" watchObservedRunningTime="2025-07-15 11:38:10.949877515 +0000 UTC m=+35.476397060" Jul 15 11:38:10.990321 kubelet[2105]: I0715 11:38:10.990277 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ee8c6a3-99a2-43e1-9727-ce93933762ec-whisker-backend-key-pair\") pod \"whisker-57fb689d55-qjqrd\" (UID: \"4ee8c6a3-99a2-43e1-9727-ce93933762ec\") " pod="calico-system/whisker-57fb689d55-qjqrd" Jul 15 11:38:10.990321 kubelet[2105]: I0715 11:38:10.990327 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ee8c6a3-99a2-43e1-9727-ce93933762ec-whisker-ca-bundle\") pod \"whisker-57fb689d55-qjqrd\" (UID: \"4ee8c6a3-99a2-43e1-9727-ce93933762ec\") " pod="calico-system/whisker-57fb689d55-qjqrd" Jul 15 11:38:10.990506 kubelet[2105]: I0715 11:38:10.990345 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj74\" (UniqueName: \"kubernetes.io/projected/4ee8c6a3-99a2-43e1-9727-ce93933762ec-kube-api-access-vrj74\") pod \"whisker-57fb689d55-qjqrd\" (UID: \"4ee8c6a3-99a2-43e1-9727-ce93933762ec\") " pod="calico-system/whisker-57fb689d55-qjqrd" Jul 15 11:38:11.259790 env[1314]: time="2025-07-15T11:38:11.259665543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fb689d55-qjqrd,Uid:4ee8c6a3-99a2-43e1-9727-ce93933762ec,Namespace:calico-system,Attempt:0,}" Jul 15 11:38:11.329165 systemd[1]: run-netns-cni\x2d6c0caae9\x2d5a0b\x2d178b\x2dd5b9\x2d2875ed0c827b.mount: Deactivated successfully. Jul 15 11:38:11.329309 systemd[1]: var-lib-kubelet-pods-66f25f04\x2d1422\x2d4094\x2db0a1\x2dc4395744d9e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg5tcj.mount: Deactivated successfully. Jul 15 11:38:11.329398 systemd[1]: var-lib-kubelet-pods-66f25f04\x2d1422\x2d4094\x2db0a1\x2dc4395744d9e1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 11:38:11.472273 systemd-networkd[1089]: calibac8cfe0bc4: Link UP Jul 15 11:38:11.474326 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:38:11.474435 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibac8cfe0bc4: link becomes ready Jul 15 11:38:11.474575 systemd-networkd[1089]: calibac8cfe0bc4: Gained carrier Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.405 [INFO][3428] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.415 [INFO][3428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57fb689d55--qjqrd-eth0 whisker-57fb689d55- calico-system 4ee8c6a3-99a2-43e1-9727-ce93933762ec 912 0 2025-07-15 11:38:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57fb689d55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-57fb689d55-qjqrd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibac8cfe0bc4 [] [] }} ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Namespace="calico-system" Pod="whisker-57fb689d55-qjqrd" WorkloadEndpoint="localhost-k8s-whisker--57fb689d55--qjqrd-" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.416 [INFO][3428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Namespace="calico-system" Pod="whisker-57fb689d55-qjqrd" WorkloadEndpoint="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.438 [INFO][3443] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" HandleID="k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Workload="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.439 [INFO][3443] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" HandleID="k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Workload="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57fb689d55-qjqrd", "timestamp":"2025-07-15 11:38:11.438886118 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.439 [INFO][3443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.439 [INFO][3443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.439 [INFO][3443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.444 [INFO][3443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.448 [INFO][3443] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.452 [INFO][3443] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.453 [INFO][3443] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.455 [INFO][3443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.455 [INFO][3443] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.456 [INFO][3443] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3 Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.459 [INFO][3443] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.462 [INFO][3443] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.462 [INFO][3443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" host="localhost" Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.462 [INFO][3443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
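The IPAM trace above illustrates Calico's block-affinity model: the node "localhost" holds an affinity for the /26 block 192.168.88.128/26, so the new whisker pod is assigned the first claimed address in that block, 192.168.88.129. A /26 covers 64 addresses, so one affine block serves up to 64 workloads on the node before another block would be claimed. A small sketch of the block arithmetic only, not Calico's actual allocator:

    // ipam_block.go -- sketch of the /26 block arithmetic behind the IPAM lines above;
    // this is plain CIDR math, not Calico's allocator.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64

        assigned := netip.MustParseAddr("192.168.88.129") // address claimed in the log
        fmt.Println("inside block:", block.Contains(assigned)) // true

        // First candidate after the block's network address:
        first := block.Addr().Next()
        fmt.Println("first assignable candidate:", first) // 192.168.88.129
    }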
Jul 15 11:38:11.484308 env[1314]: 2025-07-15 11:38:11.463 [INFO][3443] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" HandleID="k8s-pod-network.eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Workload="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" Jul 15 11:38:11.485002 env[1314]: 2025-07-15 11:38:11.465 [INFO][3428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Namespace="calico-system" Pod="whisker-57fb689d55-qjqrd" WorkloadEndpoint="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57fb689d55--qjqrd-eth0", GenerateName:"whisker-57fb689d55-", Namespace:"calico-system", SelfLink:"", UID:"4ee8c6a3-99a2-43e1-9727-ce93933762ec", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 38, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57fb689d55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57fb689d55-qjqrd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibac8cfe0bc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:11.485002 env[1314]: 2025-07-15 11:38:11.465 [INFO][3428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Namespace="calico-system" Pod="whisker-57fb689d55-qjqrd" WorkloadEndpoint="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" Jul 15 11:38:11.485002 env[1314]: 2025-07-15 11:38:11.465 [INFO][3428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibac8cfe0bc4 ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Namespace="calico-system" Pod="whisker-57fb689d55-qjqrd" WorkloadEndpoint="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" Jul 15 11:38:11.485002 env[1314]: 2025-07-15 11:38:11.474 [INFO][3428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Namespace="calico-system" Pod="whisker-57fb689d55-qjqrd" WorkloadEndpoint="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" Jul 15 11:38:11.485002 env[1314]: 2025-07-15 11:38:11.474 [INFO][3428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Namespace="calico-system" Pod="whisker-57fb689d55-qjqrd" WorkloadEndpoint="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57fb689d55--qjqrd-eth0", GenerateName:"whisker-57fb689d55-", Namespace:"calico-system", SelfLink:"", UID:"4ee8c6a3-99a2-43e1-9727-ce93933762ec", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 38, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57fb689d55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3", Pod:"whisker-57fb689d55-qjqrd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibac8cfe0bc4", MAC:"6a:23:d7:ba:cf:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:11.485002 env[1314]: 2025-07-15 11:38:11.482 [INFO][3428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3" Namespace="calico-system" Pod="whisker-57fb689d55-qjqrd" WorkloadEndpoint="localhost-k8s-whisker--57fb689d55--qjqrd-eth0" Jul 15 11:38:11.493135 env[1314]: time="2025-07-15T11:38:11.492992045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:38:11.493135 env[1314]: time="2025-07-15T11:38:11.493028444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:38:11.493135 env[1314]: time="2025-07-15T11:38:11.493037781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:38:11.493322 env[1314]: time="2025-07-15T11:38:11.493177074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3 pid=3466 runtime=io.containerd.runc.v2 Jul 15 11:38:11.515771 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:38:11.538474 env[1314]: time="2025-07-15T11:38:11.538432689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fb689d55-qjqrd,Uid:4ee8c6a3-99a2-43e1-9727-ce93933762ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3\"" Jul 15 11:38:11.539681 env[1314]: time="2025-07-15T11:38:11.539660711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 11:38:11.565637 kubelet[2105]: I0715 11:38:11.565566 2105 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f25f04-1422-4094-b0a1-c4395744d9e1" path="/var/lib/kubelet/pods/66f25f04-1422-4094-b0a1-c4395744d9e1/volumes" Jul 15 11:38:11.825000 audit[3541]: AVC avc: denied { write } for pid=3541 comm="tee" name="fd" dev="proc" ino=24931 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:38:11.825000 audit[3541]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd422387f1 a2=241 a3=1b6 items=1 ppid=3510 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:11.825000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 15 11:38:11.825000 audit: PATH item=0 name="/dev/fd/63" inode=23455 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:38:11.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:38:11.826000 audit[3564]: AVC avc: denied { write } for pid=3564 comm="tee" name="fd" dev="proc" ino=24356 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:38:11.826000 audit[3564]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa94527f3 a2=241 a3=1b6 items=1 ppid=3518 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:11.826000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 15 11:38:11.826000 audit: PATH item=0 name="/dev/fd/63" inode=23458 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:38:11.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:38:11.835000 audit[3574]: AVC avc: denied { write } for pid=3574 comm="tee" name="fd" dev="proc" ino=24362 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:38:11.835000 audit[3574]: SYSCALL 
arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff4b19f7f2 a2=241 a3=1b6 items=1 ppid=3511 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:11.835000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 15 11:38:11.835000 audit: PATH item=0 name="/dev/fd/63" inode=24359 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:38:11.835000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:38:11.852000 audit[3585]: AVC avc: denied { write } for pid=3585 comm="tee" name="fd" dev="proc" ino=23475 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:38:11.857000 audit[3577]: AVC avc: denied { write } for pid=3577 comm="tee" name="fd" dev="proc" ino=24940 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:38:11.857000 audit[3577]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd5f8907f1 a2=241 a3=1b6 items=1 ppid=3522 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:11.857000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 15 11:38:11.857000 audit: PATH item=0 name="/dev/fd/63" inode=23464 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:38:11.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:38:11.852000 audit[3585]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdc70937f1 a2=241 a3=1b6 items=1 ppid=3525 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:11.852000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 15 11:38:11.852000 audit: PATH item=0 name="/dev/fd/63" inode=23470 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:38:11.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:38:11.872000 audit[3588]: AVC avc: denied { write } for pid=3588 comm="tee" name="fd" dev="proc" ino=25715 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:38:11.872000 audit[3588]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd76b167e2 a2=241 a3=1b6 items=1 ppid=3512 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:11.872000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 15 11:38:11.872000 audit: PATH item=0 
name="/dev/fd/63" inode=23471 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:38:11.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:38:11.874000 audit[3590]: AVC avc: denied { write } for pid=3590 comm="tee" name="fd" dev="proc" ino=23479 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 15 11:38:11.874000 audit[3590]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce5e037e1 a2=241 a3=1b6 items=1 ppid=3516 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:11.874000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 15 11:38:11.874000 audit: PATH item=0 name="/dev/fd/63" inode=23472 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:38:11.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 15 11:38:12.829988 env[1314]: time="2025-07-15T11:38:12.829939621Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:12.832168 env[1314]: time="2025-07-15T11:38:12.832121978Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:12.833839 env[1314]: time="2025-07-15T11:38:12.833799907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:12.835528 env[1314]: time="2025-07-15T11:38:12.835494276Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:12.835902 env[1314]: time="2025-07-15T11:38:12.835868561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 11:38:12.837480 env[1314]: time="2025-07-15T11:38:12.837452702Z" level=info msg="CreateContainer within sandbox \"eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 11:38:12.850462 env[1314]: time="2025-07-15T11:38:12.850422834Z" level=info msg="CreateContainer within sandbox \"eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7a429f4be150fea5fd10198b6d3900072b16ce97a2476ccc3899492482735238\"" Jul 15 11:38:12.850885 env[1314]: time="2025-07-15T11:38:12.850858435Z" level=info msg="StartContainer for \"7a429f4be150fea5fd10198b6d3900072b16ce97a2476ccc3899492482735238\"" Jul 15 11:38:12.902057 env[1314]: 
time="2025-07-15T11:38:12.902008842Z" level=info msg="StartContainer for \"7a429f4be150fea5fd10198b6d3900072b16ce97a2476ccc3899492482735238\" returns successfully" Jul 15 11:38:12.903622 env[1314]: time="2025-07-15T11:38:12.903127748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 11:38:12.966983 kubelet[2105]: I0715 11:38:12.966933 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:38:12.967403 kubelet[2105]: E0715 11:38:12.967276 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:12.992000 audit[3660]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:12.992000 audit[3660]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffec4a85cb0 a2=0 a3=7ffec4a85c9c items=0 ppid=2252 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:12.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:12.998000 audit[3660]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:12.998000 audit[3660]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffec4a85cb0 a2=0 a3=7ffec4a85c9c items=0 ppid=2252 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:12.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:13.479458 systemd-networkd[1089]: calibac8cfe0bc4: Gained IPv6LL Jul 15 11:38:13.931163 kubelet[2105]: E0715 11:38:13.931128 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.094000 audit: BPF prog-id=10 op=LOAD Jul 15 11:38:14.094000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0bd40b50 a2=98 a3=1fffffffffffffff items=0 ppid=3670 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.094000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:38:14.095000 audit: BPF prog-id=10 op=UNLOAD Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.095000 audit: BPF prog-id=11 op=LOAD Jul 15 11:38:14.095000 audit[3716]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0bd40a30 a2=94 a3=3 items=0 ppid=3670 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.095000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:38:14.096000 audit: BPF prog-id=11 op=UNLOAD Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit[3716]: AVC avc: denied { bpf } for pid=3716 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.096000 audit: BPF prog-id=12 op=LOAD Jul 15 11:38:14.096000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0bd40a70 a2=94 a3=7ffe0bd40c50 items=0 ppid=3670 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.096000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:38:14.097000 audit: BPF prog-id=12 op=UNLOAD Jul 15 11:38:14.097000 audit[3716]: AVC avc: denied { perfmon } for pid=3716 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:38:14.097000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffe0bd40b40 a2=50 a3=a000000085 items=0 ppid=3670 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.097000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit: BPF prog-id=13 op=LOAD Jul 15 11:38:14.098000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda7ebf170 a2=98 a3=3 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.098000 audit: BPF prog-id=13 op=UNLOAD Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 
audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit: BPF prog-id=14 op=LOAD Jul 15 11:38:14.098000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda7ebef60 a2=94 a3=54428f items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.098000 audit: BPF prog-id=14 op=UNLOAD Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 
audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.098000 audit: BPF prog-id=15 op=LOAD Jul 15 11:38:14.098000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda7ebef90 a2=94 a3=2 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.098000 audit: BPF prog-id=15 op=UNLOAD Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit: BPF prog-id=16 op=LOAD Jul 15 11:38:14.202000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda7ebee50 a2=94 a3=1 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.202000 audit: BPF prog-id=16 op=UNLOAD Jul 15 11:38:14.202000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.202000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffda7ebef20 a2=50 a3=7ffda7ebf000 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffda7ebee60 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7ebee90 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7ebeda0 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffda7ebeeb0 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffda7ebee90 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffda7ebee80 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffda7ebeeb0 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7ebee90 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7ebeeb0 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7ebee80 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffda7ebeef0 a2=28 a3=0 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffda7ebeca0 a2=50 a3=1 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit: BPF prog-id=17 op=LOAD Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda7ebeca0 a2=94 a3=5 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit: BPF 
prog-id=17 op=UNLOAD Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffda7ebed50 a2=50 a3=1 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffda7ebee70 a2=4 a3=38 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.210000 audit[3717]: AVC avc: denied { confidentiality } for pid=3717 comm="bpftool" 
lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:38:14.210000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffda7ebeec0 a2=94 a3=6 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { confidentiality } for pid=3717 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:38:14.211000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffda7ebe670 a2=94 a3=88 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.211000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { perfmon } for pid=3717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { bpf } for pid=3717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.211000 audit[3717]: AVC avc: denied { confidentiality } for pid=3717 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:38:14.211000 audit[3717]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffda7ebe670 a2=94 a3=88 items=0 ppid=3670 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.211000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit: BPF prog-id=18 op=LOAD Jul 15 11:38:14.224000 audit[3720]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb2943290 a2=98 a3=1999999999999999 items=0 ppid=3670 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.224000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:38:14.224000 audit: BPF prog-id=18 op=UNLOAD Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit: BPF prog-id=19 op=LOAD Jul 15 11:38:14.224000 audit[3720]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb2943170 a2=94 a3=ffff items=0 ppid=3670 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.224000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:38:14.224000 audit: BPF prog-id=19 op=UNLOAD Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { perfmon } for pid=3720 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit[3720]: AVC avc: denied { bpf } for pid=3720 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.224000 audit: BPF prog-id=20 op=LOAD Jul 15 11:38:14.224000 audit[3720]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb29431b0 a2=94 a3=7ffeb2943390 items=0 ppid=3670 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.224000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 15 11:38:14.224000 audit: BPF prog-id=20 op=UNLOAD Jul 15 11:38:14.338476 systemd-networkd[1089]: vxlan.calico: Link UP Jul 15 11:38:14.338483 systemd-networkd[1089]: vxlan.calico: Gained carrier Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.389000 audit: BPF prog-id=21 op=LOAD Jul 15 11:38:14.389000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd762a8ee0 a2=98 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.389000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.389000 audit: BPF prog-id=21 op=UNLOAD Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: 
AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit: BPF prog-id=22 op=LOAD Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd762a8cf0 a2=94 a3=54428f items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit: BPF prog-id=22 op=UNLOAD Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit: BPF prog-id=23 op=LOAD Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd762a8d20 a2=94 a3=2 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit: BPF prog-id=23 op=UNLOAD Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd762a8bf0 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd762a8c20 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd762a8b30 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd762a8c40 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd762a8c20 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd762a8c10 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd762a8c40 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd762a8c20 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd762a8c40 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.390000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.390000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd762a8c10 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd762a8c80 a2=28 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 
11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit: BPF prog-id=24 op=LOAD Jul 15 11:38:14.391000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd762a8af0 a2=94 a3=0 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.391000 audit: BPF prog-id=24 op=UNLOAD Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd762a8ae0 a2=50 a3=2800 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd762a8ae0 a2=50 a3=2800 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 
11:38:14.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit: BPF prog-id=25 op=LOAD Jul 15 11:38:14.391000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd762a8300 a2=94 a3=2 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.391000 audit: BPF prog-id=25 op=UNLOAD Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC 
avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { perfmon } for pid=3746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit[3746]: AVC avc: denied { bpf } for pid=3746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.391000 audit: BPF prog-id=26 op=LOAD Jul 15 11:38:14.391000 audit[3746]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd762a8400 a2=94 a3=30 items=0 ppid=3670 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit: BPF prog-id=27 op=LOAD Jul 15 11:38:14.394000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff12a5e8b0 a2=98 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.394000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.394000 audit: BPF prog-id=27 op=UNLOAD Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit: BPF prog-id=28 op=LOAD Jul 15 11:38:14.394000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff12a5e6a0 a2=94 a3=54428f items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.394000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.394000 audit: BPF prog-id=28 op=UNLOAD Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.394000 audit: BPF prog-id=29 op=LOAD Jul 15 11:38:14.394000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff12a5e6d0 a2=94 a3=2 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.394000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.395000 audit: BPF prog-id=29 op=UNLOAD Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit: BPF prog-id=30 op=LOAD Jul 15 11:38:14.497000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff12a5e590 a2=94 a3=1 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.497000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.497000 audit: BPF prog-id=30 op=UNLOAD Jul 15 11:38:14.497000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.497000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff12a5e660 a2=50 a3=7fff12a5e740 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.497000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff12a5e5a0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12a5e5d0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12a5e4e0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff12a5e5f0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff12a5e5d0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7fff12a5e5c0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff12a5e5f0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12a5e5d0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12a5e5f0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12a5e5c0 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff12a5e630 a2=28 a3=0 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff12a5e3e0 a2=50 a3=1 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit: BPF prog-id=31 op=LOAD Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff12a5e3e0 a2=94 a3=5 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit: BPF prog-id=31 op=UNLOAD Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff12a5e490 a2=50 a3=1 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff12a5e5b0 a2=4 a3=38 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.505000 audit[3755]: AVC avc: denied { confidentiality } for pid=3755 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:38:14.505000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff12a5e600 a2=94 a3=6 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.505000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { confidentiality } for pid=3755 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 15 11:38:14.506000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff12a5ddb0 a2=94 a3=88 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { perfmon } for pid=3755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { confidentiality } for pid=3755 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 
15 11:38:14.506000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff12a5ddb0 a2=94 a3=88 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff12a5f7e0 a2=10 a3=208 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff12a5f680 a2=10 a3=3 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff12a5f620 a2=10 a3=3 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.506000 audit[3755]: AVC avc: denied { bpf } for pid=3755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 15 11:38:14.506000 audit[3755]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff12a5f620 a2=10 a3=7 items=0 ppid=3670 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.506000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 15 11:38:14.514000 audit: BPF prog-id=26 op=UNLOAD Jul 15 11:38:14.553000 audit[3779]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3779 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:14.553000 audit[3779]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffec97e7220 a2=0 a3=7ffec97e720c items=0 ppid=3670 pid=3779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.553000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:14.558000 audit[3778]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3778 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:14.558000 audit[3778]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe6f771b60 a2=0 a3=7ffe6f771b4c items=0 ppid=3670 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.558000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:14.563000 audit[3777]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:14.563000 audit[3777]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffda8c0cb10 a2=0 a3=7ffda8c0cafc items=0 ppid=3670 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.563000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:14.566000 audit[3782]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:14.566000 audit[3782]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc537b4400 a2=0 a3=7ffc537b43ec items=0 ppid=3670 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:14.566000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:15.034604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566028318.mount: Deactivated successfully. 
Jul 15 11:38:15.224567 env[1314]: time="2025-07-15T11:38:15.224503438Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:15.226525 env[1314]: time="2025-07-15T11:38:15.226463364Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:15.227889 env[1314]: time="2025-07-15T11:38:15.227848450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:15.229438 env[1314]: time="2025-07-15T11:38:15.229395890Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:15.229974 env[1314]: time="2025-07-15T11:38:15.229932530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 11:38:15.232030 env[1314]: time="2025-07-15T11:38:15.232000239Z" level=info msg="CreateContainer within sandbox \"eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 11:38:15.243697 env[1314]: time="2025-07-15T11:38:15.243657004Z" level=info msg="CreateContainer within sandbox \"eafcdb8ce792b09169d7186b3b67d024a3d764bbcff7b37c3b5ee121e613c8f3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"360ca29bfc98fc3b218b7a1d2d047798bf86af5dc36334b26cf0f1792497323b\"" Jul 15 11:38:15.244072 env[1314]: time="2025-07-15T11:38:15.244041497Z" level=info msg="StartContainer for \"360ca29bfc98fc3b218b7a1d2d047798bf86af5dc36334b26cf0f1792497323b\"" Jul 15 11:38:15.295454 env[1314]: time="2025-07-15T11:38:15.295364591Z" level=info msg="StartContainer for \"360ca29bfc98fc3b218b7a1d2d047798bf86af5dc36334b26cf0f1792497323b\" returns successfully" Jul 15 11:38:15.382642 kubelet[2105]: I0715 11:38:15.382613 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:38:15.530958 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:45578.service. Jul 15 11:38:15.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.133:22-10.0.0.1:45578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:15.532061 kernel: kauditd_printk_skb: 564 callbacks suppressed Jul 15 11:38:15.532329 kernel: audit: type=1130 audit(1752579495.529:400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.133:22-10.0.0.1:45578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:38:15.566061 env[1314]: time="2025-07-15T11:38:15.564836719Z" level=info msg="StopPodSandbox for \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\"" Jul 15 11:38:15.566061 env[1314]: time="2025-07-15T11:38:15.565006048Z" level=info msg="StopPodSandbox for \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\"" Jul 15 11:38:15.566061 env[1314]: time="2025-07-15T11:38:15.565366777Z" level=info msg="StopPodSandbox for \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\"" Jul 15 11:38:15.567000 audit[3876]: USER_ACCT pid=3876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.572659 sshd[3876]: Accepted publickey for core from 10.0.0.1 port 45578 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:15.577514 kernel: audit: type=1101 audit(1752579495.567:401): pid=3876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.577542 kernel: audit: type=1103 audit(1752579495.571:402): pid=3876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.577562 kernel: audit: type=1006 audit(1752579495.571:403): pid=3876 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 15 11:38:15.571000 audit[3876]: CRED_ACQ pid=3876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.573410 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:15.571000 audit[3876]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe523a4830 a2=3 a3=0 items=0 ppid=1 pid=3876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:15.582998 kernel: audit: type=1300 audit(1752579495.571:403): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe523a4830 a2=3 a3=0 items=0 ppid=1 pid=3876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:15.583039 kernel: audit: type=1327 audit(1752579495.571:403): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:15.571000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:15.587737 systemd[1]: Started session-9.scope. Jul 15 11:38:15.588701 systemd-logind[1296]: New session 9 of user core. 
Jul 15 11:38:15.601361 kernel: audit: type=1105 audit(1752579495.592:404): pid=3876 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.601411 kernel: audit: type=1103 audit(1752579495.592:405): pid=3931 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.592000 audit[3876]: USER_START pid=3876 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.592000 audit[3931]: CRED_ACQ pid=3931 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.641 [INFO][3920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.641 [INFO][3920] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" iface="eth0" netns="/var/run/netns/cni-07a95ad5-e95b-34b5-e930-b71314fe2067" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.641 [INFO][3920] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" iface="eth0" netns="/var/run/netns/cni-07a95ad5-e95b-34b5-e930-b71314fe2067" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.641 [INFO][3920] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" iface="eth0" netns="/var/run/netns/cni-07a95ad5-e95b-34b5-e930-b71314fe2067" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.641 [INFO][3920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.641 [INFO][3920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.666 [INFO][3953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.666 [INFO][3953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.666 [INFO][3953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.676 [WARNING][3953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.676 [INFO][3953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.677 [INFO][3953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:15.681827 env[1314]: 2025-07-15 11:38:15.679 [INFO][3920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:15.684353 systemd[1]: run-netns-cni\x2d07a95ad5\x2de95b\x2d34b5\x2de930\x2db71314fe2067.mount: Deactivated successfully. Jul 15 11:38:15.685148 env[1314]: time="2025-07-15T11:38:15.685038888Z" level=info msg="TearDown network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\" successfully" Jul 15 11:38:15.685148 env[1314]: time="2025-07-15T11:38:15.685069646Z" level=info msg="StopPodSandbox for \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\" returns successfully" Jul 15 11:38:15.685629 env[1314]: time="2025-07-15T11:38:15.685597929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548f644bc4-2cx2f,Uid:c60b401d-8be3-4942-8e09-43794a037070,Namespace:calico-apiserver,Attempt:1,}" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.613 [INFO][3911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.613 [INFO][3911] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" iface="eth0" netns="/var/run/netns/cni-2f8879b8-a6dd-e9ea-0379-da6c856ca14e" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.613 [INFO][3911] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" iface="eth0" netns="/var/run/netns/cni-2f8879b8-a6dd-e9ea-0379-da6c856ca14e" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.614 [INFO][3911] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" iface="eth0" netns="/var/run/netns/cni-2f8879b8-a6dd-e9ea-0379-da6c856ca14e" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.614 [INFO][3911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.614 [INFO][3911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.680 [INFO][3938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.680 [INFO][3938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.680 [INFO][3938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.688 [WARNING][3938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.688 [INFO][3938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.689 [INFO][3938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:15.700347 env[1314]: 2025-07-15 11:38:15.692 [INFO][3911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:15.702563 systemd[1]: run-netns-cni\x2d2f8879b8\x2da6dd\x2de9ea\x2d0379\x2dda6c856ca14e.mount: Deactivated successfully. Jul 15 11:38:15.705219 env[1314]: time="2025-07-15T11:38:15.705050678Z" level=info msg="TearDown network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\" successfully" Jul 15 11:38:15.705219 env[1314]: time="2025-07-15T11:38:15.705157880Z" level=info msg="StopPodSandbox for \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\" returns successfully" Jul 15 11:38:15.706314 env[1314]: time="2025-07-15T11:38:15.705990405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b4db4784-n8kns,Uid:025841db-a9f5-430b-a1a5-f023b95f1b83,Namespace:calico-system,Attempt:1,}" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.631 [INFO][3917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.631 [INFO][3917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" iface="eth0" netns="/var/run/netns/cni-41778278-606e-fd1a-597f-96dd03c6fd06" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.632 [INFO][3917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" iface="eth0" netns="/var/run/netns/cni-41778278-606e-fd1a-597f-96dd03c6fd06" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.632 [INFO][3917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" iface="eth0" netns="/var/run/netns/cni-41778278-606e-fd1a-597f-96dd03c6fd06" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.632 [INFO][3917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.632 [INFO][3917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.697 [INFO][3945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.697 [INFO][3945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.697 [INFO][3945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.704 [WARNING][3945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.704 [INFO][3945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.706 [INFO][3945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:15.710496 env[1314]: 2025-07-15 11:38:15.708 [INFO][3917] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:15.710829 env[1314]: time="2025-07-15T11:38:15.710668674Z" level=info msg="TearDown network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\" successfully" Jul 15 11:38:15.710829 env[1314]: time="2025-07-15T11:38:15.710707407Z" level=info msg="StopPodSandbox for \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\" returns successfully" Jul 15 11:38:15.711119 env[1314]: time="2025-07-15T11:38:15.711082503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548f644bc4-kt62w,Uid:3f1c906d-fd33-4115-b0b0-35d63313ac89,Namespace:calico-apiserver,Attempt:1,}" Jul 15 11:38:15.977000 audit[3977]: NETFILTER_CFG table=filter:105 family=2 entries=19 op=nft_register_rule pid=3977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:15.977000 audit[3977]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe808d1fc0 a2=0 a3=7ffe808d1fac items=0 ppid=2252 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:15.986218 kernel: audit: type=1325 audit(1752579495.977:406): table=filter:105 family=2 entries=19 op=nft_register_rule pid=3977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:15.986316 kernel: audit: type=1300 audit(1752579495.977:406): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe808d1fc0 a2=0 a3=7ffe808d1fac items=0 ppid=2252 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:15.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:15.987000 audit[3977]: NETFILTER_CFG table=nat:106 family=2 entries=21 op=nft_register_chain pid=3977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:15.987000 audit[3977]: SYSCALL arch=c000003e syscall=46 success=yes exit=7044 a0=3 a1=7ffe808d1fc0 a2=0 a3=7ffe808d1fac items=0 ppid=2252 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:15.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:15.991734 sshd[3876]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:15.991000 audit[3876]: USER_END pid=3876 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.991000 audit[3876]: CRED_DISP pid=3876 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:15.994518 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:45578.service: Deactivated successfully. 
Jul 15 11:38:15.995791 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 11:38:15.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.133:22-10.0.0.1:45578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:15.996183 systemd-logind[1296]: Session 9 logged out. Waiting for processes to exit. Jul 15 11:38:15.996832 systemd-logind[1296]: Removed session 9. Jul 15 11:38:16.090538 systemd-networkd[1089]: cali5e0618bd885: Link UP Jul 15 11:38:16.091743 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:38:16.091848 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5e0618bd885: link becomes ready Jul 15 11:38:16.094445 systemd-networkd[1089]: cali5e0618bd885: Gained carrier Jul 15 11:38:16.117266 kubelet[2105]: I0715 11:38:16.101737 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57fb689d55-qjqrd" podStartSLOduration=2.410337906 podStartE2EDuration="6.101718711s" podCreationTimestamp="2025-07-15 11:38:10 +0000 UTC" firstStartedPulling="2025-07-15 11:38:11.539430788 +0000 UTC m=+36.065950333" lastFinishedPulling="2025-07-15 11:38:15.230811592 +0000 UTC m=+39.757331138" observedRunningTime="2025-07-15 11:38:15.964201244 +0000 UTC m=+40.490720799" watchObservedRunningTime="2025-07-15 11:38:16.101718711 +0000 UTC m=+40.628238246" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.025 [INFO][3981] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0 calico-apiserver-548f644bc4- calico-apiserver 3f1c906d-fd33-4115-b0b0-35d63313ac89 977 0 2025-07-15 11:37:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548f644bc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-548f644bc4-kt62w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5e0618bd885 [] [] }} ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-kt62w" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--kt62w-" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.025 [INFO][3981] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-kt62w" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.056 [INFO][4015] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" HandleID="k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.057 [INFO][4015] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" HandleID="k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-548f644bc4-kt62w", "timestamp":"2025-07-15 11:38:16.056848522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.057 [INFO][4015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.057 [INFO][4015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.057 [INFO][4015] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.063 [INFO][4015] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.067 [INFO][4015] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.071 [INFO][4015] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.072 [INFO][4015] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.074 [INFO][4015] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.074 [INFO][4015] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.075 [INFO][4015] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47 Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.079 [INFO][4015] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.084 [INFO][4015] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.084 [INFO][4015] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" host="localhost" Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.084 [INFO][4015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
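The ipam.go lines above trace one address assignment end to end: the plugin takes the host-wide IPAM lock, looks up the block this host has an affinity for (192.168.88.128/26), loads it, claims the next free address (192.168.88.130), records a handle named after the sandbox ID, writes the block back, and releases the lock. Below is a minimal sketch of that next-free-address idea only, assuming an in-memory block and one address (.129) already handed out before this excerpt; Calico's real allocator in ipam/ipam.go persists blocks and handles in the datastore rather than in a map:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu    sync.Mutex            // stands in for the "host-wide IPAM lock" in the log
	cidr  netip.Prefix          // the affine block, e.g. 192.168.88.128/26
	inUse map[netip.Addr]string // addr -> handle (e.g. k8s-pod-network.<sandbox ID>)
}

// assign returns the lowest free address after the block's base address and
// records the handle that claimed it, mirroring the "Attempting to assign 1
// addresses from block" / "Successfully claimed IPs" steps above.
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.inUse[a]; !taken {
			b.inUse[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		inUse: map[netip.Addr]string{
			// Assume .129 was claimed before this excerpt, so the next request gets .130.
			netip.MustParseAddr("192.168.88.129"): "k8s-pod-network.an-earlier-sandbox",
		},
	}
	ip, _ := b.assign("k8s-pod-network.hypothetical-handle")
	fmt.Println("assigned:", ip) // prints 192.168.88.130, matching the log above
}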
Jul 15 11:38:16.117503 env[1314]: 2025-07-15 11:38:16.084 [INFO][4015] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" HandleID="k8s-pod-network.4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:16.117963 env[1314]: 2025-07-15 11:38:16.087 [INFO][3981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-kt62w" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0", GenerateName:"calico-apiserver-548f644bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f1c906d-fd33-4115-b0b0-35d63313ac89", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548f644bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-548f644bc4-kt62w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e0618bd885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:16.117963 env[1314]: 2025-07-15 11:38:16.087 [INFO][3981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-kt62w" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:16.117963 env[1314]: 2025-07-15 11:38:16.087 [INFO][3981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e0618bd885 ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-kt62w" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:16.117963 env[1314]: 2025-07-15 11:38:16.091 [INFO][3981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-kt62w" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:16.117963 env[1314]: 2025-07-15 11:38:16.092 [INFO][3981] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-kt62w" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0", GenerateName:"calico-apiserver-548f644bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f1c906d-fd33-4115-b0b0-35d63313ac89", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548f644bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47", Pod:"calico-apiserver-548f644bc4-kt62w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e0618bd885", MAC:"be:db:ff:af:ae:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:16.117963 env[1314]: 2025-07-15 11:38:16.100 [INFO][3981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-kt62w" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:16.117963 env[1314]: time="2025-07-15T11:38:16.111556130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:38:16.117963 env[1314]: time="2025-07-15T11:38:16.111613208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:38:16.117963 env[1314]: time="2025-07-15T11:38:16.111633516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:38:16.117963 env[1314]: time="2025-07-15T11:38:16.111744385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47 pid=4061 runtime=io.containerd.runc.v2 Jul 15 11:38:16.141309 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:38:16.161000 audit[4097]: NETFILTER_CFG table=filter:107 family=2 entries=50 op=nft_register_chain pid=4097 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:16.161000 audit[4097]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffe0e8905f0 a2=0 a3=7ffe0e8905dc items=0 ppid=3670 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:16.161000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:16.166294 env[1314]: time="2025-07-15T11:38:16.166254862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548f644bc4-kt62w,Uid:3f1c906d-fd33-4115-b0b0-35d63313ac89,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47\"" Jul 15 11:38:16.168255 env[1314]: time="2025-07-15T11:38:16.168209988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 11:38:16.192040 systemd-networkd[1089]: caliaf293a26690: Link UP Jul 15 11:38:16.194782 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaf293a26690: link becomes ready Jul 15 11:38:16.194493 systemd-networkd[1089]: caliaf293a26690: Gained carrier Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.043 [INFO][3995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0 calico-apiserver-548f644bc4- calico-apiserver c60b401d-8be3-4942-8e09-43794a037070 978 0 2025-07-15 11:37:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548f644bc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-548f644bc4-2cx2f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaf293a26690 [] [] }} ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-2cx2f" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.043 [INFO][3995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-2cx2f" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.074 [INFO][4031] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" 
HandleID="k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.074 [INFO][4031] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" HandleID="k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-548f644bc4-2cx2f", "timestamp":"2025-07-15 11:38:16.074057794 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.074 [INFO][4031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.084 [INFO][4031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.085 [INFO][4031] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.164 [INFO][4031] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.168 [INFO][4031] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.172 [INFO][4031] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.174 [INFO][4031] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.175 [INFO][4031] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.175 [INFO][4031] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.176 [INFO][4031] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31 Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.179 [INFO][4031] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.183 [INFO][4031] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.183 [INFO][4031] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" host="localhost" Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.183 [INFO][4031] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:16.205261 env[1314]: 2025-07-15 11:38:16.183 [INFO][4031] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" HandleID="k8s-pod-network.ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:16.205819 env[1314]: 2025-07-15 11:38:16.188 [INFO][3995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-2cx2f" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0", GenerateName:"calico-apiserver-548f644bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c60b401d-8be3-4942-8e09-43794a037070", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548f644bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-548f644bc4-2cx2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf293a26690", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:16.205819 env[1314]: 2025-07-15 11:38:16.189 [INFO][3995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-2cx2f" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:16.205819 env[1314]: 2025-07-15 11:38:16.189 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf293a26690 ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-2cx2f" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:16.205819 env[1314]: 2025-07-15 11:38:16.195 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-2cx2f" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:16.205819 env[1314]: 2025-07-15 11:38:16.195 [INFO][3995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-2cx2f" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0", GenerateName:"calico-apiserver-548f644bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c60b401d-8be3-4942-8e09-43794a037070", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548f644bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31", Pod:"calico-apiserver-548f644bc4-2cx2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf293a26690", MAC:"42:fe:6d:d7:65:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:16.205819 env[1314]: 2025-07-15 11:38:16.203 [INFO][3995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31" Namespace="calico-apiserver" Pod="calico-apiserver-548f644bc4-2cx2f" WorkloadEndpoint="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:16.216000 audit[4114]: NETFILTER_CFG table=filter:108 family=2 entries=41 op=nft_register_chain pid=4114 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:16.216000 audit[4114]: SYSCALL arch=c000003e syscall=46 success=yes exit=23076 a0=3 a1=7ffe59841bf0 a2=0 a3=7ffe59841bdc items=0 ppid=3670 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:16.216000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:16.219358 env[1314]: time="2025-07-15T11:38:16.219281678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:38:16.219358 env[1314]: time="2025-07-15T11:38:16.219334238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:38:16.219358 env[1314]: time="2025-07-15T11:38:16.219345098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:38:16.219607 env[1314]: time="2025-07-15T11:38:16.219550304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31 pid=4118 runtime=io.containerd.runc.v2 Jul 15 11:38:16.239538 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:38:16.260450 env[1314]: time="2025-07-15T11:38:16.260417194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548f644bc4-2cx2f,Uid:c60b401d-8be3-4942-8e09-43794a037070,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31\"" Jul 15 11:38:16.288455 systemd-networkd[1089]: cali285f2cee6f1: Link UP Jul 15 11:38:16.289477 systemd-networkd[1089]: cali285f2cee6f1: Gained carrier Jul 15 11:38:16.291285 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali285f2cee6f1: link becomes ready Jul 15 11:38:16.297882 systemd-networkd[1089]: vxlan.calico: Gained IPv6LL Jul 15 11:38:16.311000 audit[4154]: NETFILTER_CFG table=filter:109 family=2 entries=44 op=nft_register_chain pid=4154 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:16.311000 audit[4154]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff7327cdc0 a2=0 a3=7fff7327cdac items=0 ppid=3670 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:16.311000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.044 [INFO][3987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0 calico-kube-controllers-54b4db4784- calico-system 025841db-a9f5-430b-a1a5-f023b95f1b83 976 0 2025-07-15 11:37:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54b4db4784 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54b4db4784-n8kns eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali285f2cee6f1 [] [] }} ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Namespace="calico-system" Pod="calico-kube-controllers-54b4db4784-n8kns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.047 [INFO][3987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Namespace="calico-system" Pod="calico-kube-controllers-54b4db4784-n8kns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.084 [INFO][4033] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" 
HandleID="k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.084 [INFO][4033] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" HandleID="k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54b4db4784-n8kns", "timestamp":"2025-07-15 11:38:16.084074942 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.084 [INFO][4033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.184 [INFO][4033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.184 [INFO][4033] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.263 [INFO][4033] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.269 [INFO][4033] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.272 [INFO][4033] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.273 [INFO][4033] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.275 [INFO][4033] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.275 [INFO][4033] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.276 [INFO][4033] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308 Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.279 [INFO][4033] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.284 [INFO][4033] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.284 [INFO][4033] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" host="localhost" Jul 15 11:38:16.315113 env[1314]: 2025-07-15 
11:38:16.284 [INFO][4033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:16.315113 env[1314]: 2025-07-15 11:38:16.284 [INFO][4033] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" HandleID="k8s-pod-network.2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:16.315888 env[1314]: 2025-07-15 11:38:16.286 [INFO][3987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Namespace="calico-system" Pod="calico-kube-controllers-54b4db4784-n8kns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0", GenerateName:"calico-kube-controllers-54b4db4784-", Namespace:"calico-system", SelfLink:"", UID:"025841db-a9f5-430b-a1a5-f023b95f1b83", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b4db4784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54b4db4784-n8kns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali285f2cee6f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:16.315888 env[1314]: 2025-07-15 11:38:16.286 [INFO][3987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Namespace="calico-system" Pod="calico-kube-controllers-54b4db4784-n8kns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:16.315888 env[1314]: 2025-07-15 11:38:16.286 [INFO][3987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali285f2cee6f1 ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Namespace="calico-system" Pod="calico-kube-controllers-54b4db4784-n8kns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:16.315888 env[1314]: 2025-07-15 11:38:16.289 [INFO][3987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Namespace="calico-system" Pod="calico-kube-controllers-54b4db4784-n8kns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:16.315888 env[1314]: 2025-07-15 11:38:16.289 [INFO][3987] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Namespace="calico-system" Pod="calico-kube-controllers-54b4db4784-n8kns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0", GenerateName:"calico-kube-controllers-54b4db4784-", Namespace:"calico-system", SelfLink:"", UID:"025841db-a9f5-430b-a1a5-f023b95f1b83", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b4db4784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308", Pod:"calico-kube-controllers-54b4db4784-n8kns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali285f2cee6f1", MAC:"a2:35:ca:b8:94:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:16.315888 env[1314]: 2025-07-15 11:38:16.307 [INFO][3987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308" Namespace="calico-system" Pod="calico-kube-controllers-54b4db4784-n8kns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:16.325035 env[1314]: time="2025-07-15T11:38:16.324958344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:38:16.325115 env[1314]: time="2025-07-15T11:38:16.325032334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:38:16.325115 env[1314]: time="2025-07-15T11:38:16.325051199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:38:16.325322 env[1314]: time="2025-07-15T11:38:16.325279388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308 pid=4167 runtime=io.containerd.runc.v2 Jul 15 11:38:16.348221 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:38:16.370885 env[1314]: time="2025-07-15T11:38:16.370200603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b4db4784-n8kns,Uid:025841db-a9f5-430b-a1a5-f023b95f1b83,Namespace:calico-system,Attempt:1,} returns sandbox id \"2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308\"" Jul 15 11:38:16.384552 systemd[1]: run-netns-cni\x2d41778278\x2d606e\x2dfd1a\x2d597f\x2d96dd03c6fd06.mount: Deactivated successfully. Jul 15 11:38:16.564968 env[1314]: time="2025-07-15T11:38:16.564933018Z" level=info msg="StopPodSandbox for \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\"" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.598 [INFO][4213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.599 [INFO][4213] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" iface="eth0" netns="/var/run/netns/cni-229d18c8-b342-4ae5-d0c1-68ed89e2d730" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.599 [INFO][4213] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" iface="eth0" netns="/var/run/netns/cni-229d18c8-b342-4ae5-d0c1-68ed89e2d730" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.599 [INFO][4213] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" iface="eth0" netns="/var/run/netns/cni-229d18c8-b342-4ae5-d0c1-68ed89e2d730" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.599 [INFO][4213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.599 [INFO][4213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.616 [INFO][4222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.616 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.616 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.621 [WARNING][4222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.621 [INFO][4222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.622 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:16.625532 env[1314]: 2025-07-15 11:38:16.624 [INFO][4213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:16.625980 env[1314]: time="2025-07-15T11:38:16.625675645Z" level=info msg="TearDown network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\" successfully" Jul 15 11:38:16.625980 env[1314]: time="2025-07-15T11:38:16.625706093Z" level=info msg="StopPodSandbox for \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\" returns successfully" Jul 15 11:38:16.626034 kubelet[2105]: E0715 11:38:16.625941 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:16.626589 env[1314]: time="2025-07-15T11:38:16.626546222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbpvz,Uid:9489b1af-289a-4806-935c-bff657fb9645,Namespace:kube-system,Attempt:1,}" Jul 15 11:38:16.628199 systemd[1]: run-netns-cni\x2d229d18c8\x2db342\x2d4ae5\x2dd0c1\x2d68ed89e2d730.mount: Deactivated successfully. 
Jul 15 11:38:16.716889 systemd-networkd[1089]: cali7531955ee06: Link UP Jul 15 11:38:16.719433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7531955ee06: link becomes ready Jul 15 11:38:16.718899 systemd-networkd[1089]: cali7531955ee06: Gained carrier Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.666 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0 coredns-7c65d6cfc9- kube-system 9489b1af-289a-4806-935c-bff657fb9645 999 0 2025-07-15 11:37:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-sbpvz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7531955ee06 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbpvz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sbpvz-" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.666 [INFO][4229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbpvz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.687 [INFO][4243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" HandleID="k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.688 [INFO][4243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" HandleID="k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-sbpvz", "timestamp":"2025-07-15 11:38:16.687793098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.688 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.688 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.688 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.694 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.697 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.701 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.702 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.704 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.704 [INFO][4243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.705 [INFO][4243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.708 [INFO][4243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.712 [INFO][4243] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.712 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" host="localhost" Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.712 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:38:16.728370 env[1314]: 2025-07-15 11:38:16.712 [INFO][4243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" HandleID="k8s-pod-network.287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.728917 env[1314]: 2025-07-15 11:38:16.714 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbpvz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9489b1af-289a-4806-935c-bff657fb9645", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-sbpvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7531955ee06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:16.728917 env[1314]: 2025-07-15 11:38:16.714 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbpvz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.728917 env[1314]: 2025-07-15 11:38:16.715 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7531955ee06 ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbpvz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.728917 env[1314]: 2025-07-15 11:38:16.718 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbpvz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.728917 env[1314]: 2025-07-15 11:38:16.719 
[INFO][4229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbpvz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9489b1af-289a-4806-935c-bff657fb9645", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d", Pod:"coredns-7c65d6cfc9-sbpvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7531955ee06", MAC:"22:61:56:66:a5:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:16.728917 env[1314]: 2025-07-15 11:38:16.726 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sbpvz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:16.741094 env[1314]: time="2025-07-15T11:38:16.740343549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:38:16.741094 env[1314]: time="2025-07-15T11:38:16.740380689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:38:16.741094 env[1314]: time="2025-07-15T11:38:16.740390458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:38:16.741094 env[1314]: time="2025-07-15T11:38:16.740541041Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d pid=4268 runtime=io.containerd.runc.v2 Jul 15 11:38:16.741000 audit[4274]: NETFILTER_CFG table=filter:110 family=2 entries=54 op=nft_register_chain pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:16.741000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=26116 a0=3 a1=7ffd8604ae10 a2=0 a3=7ffd8604adfc items=0 ppid=3670 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:16.741000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:16.762104 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:38:16.783555 env[1314]: time="2025-07-15T11:38:16.783512599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbpvz,Uid:9489b1af-289a-4806-935c-bff657fb9645,Namespace:kube-system,Attempt:1,} returns sandbox id \"287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d\"" Jul 15 11:38:16.784130 kubelet[2105]: E0715 11:38:16.784105 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:16.786776 env[1314]: time="2025-07-15T11:38:16.786740128Z" level=info msg="CreateContainer within sandbox \"287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:38:16.802567 env[1314]: time="2025-07-15T11:38:16.802526545Z" level=info msg="CreateContainer within sandbox \"287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a10dbbdb21abf8b7ba19b5b03ca9e367b07227565b0f72d2581518d9dc25ddc\"" Jul 15 11:38:16.803058 env[1314]: time="2025-07-15T11:38:16.803007269Z" level=info msg="StartContainer for \"5a10dbbdb21abf8b7ba19b5b03ca9e367b07227565b0f72d2581518d9dc25ddc\"" Jul 15 11:38:16.847767 env[1314]: time="2025-07-15T11:38:16.846126595Z" level=info msg="StartContainer for \"5a10dbbdb21abf8b7ba19b5b03ca9e367b07227565b0f72d2581518d9dc25ddc\" returns successfully" Jul 15 11:38:16.938882 kubelet[2105]: E0715 11:38:16.938592 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:16.957000 audit[4339]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=4339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:16.957000 audit[4339]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd5dfa9da0 a2=0 a3=7ffd5dfa9d8c items=0 ppid=2252 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:16.957000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:16.960703 kubelet[2105]: I0715 11:38:16.960638 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sbpvz" podStartSLOduration=36.960621444 podStartE2EDuration="36.960621444s" podCreationTimestamp="2025-07-15 11:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:38:16.949976655 +0000 UTC m=+41.476496201" watchObservedRunningTime="2025-07-15 11:38:16.960621444 +0000 UTC m=+41.487140989" Jul 15 11:38:16.965000 audit[4339]: NETFILTER_CFG table=nat:112 family=2 entries=16 op=nft_register_rule pid=4339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:16.965000 audit[4339]: SYSCALL arch=c000003e syscall=46 success=yes exit=4236 a0=3 a1=7ffd5dfa9da0 a2=0 a3=0 items=0 ppid=2252 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:16.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:16.979000 audit[4341]: NETFILTER_CFG table=filter:113 family=2 entries=15 op=nft_register_rule pid=4341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:16.979000 audit[4341]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc45205360 a2=0 a3=7ffc4520534c items=0 ppid=2252 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:16.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:16.987000 audit[4341]: NETFILTER_CFG table=nat:114 family=2 entries=37 op=nft_register_chain pid=4341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:16.987000 audit[4341]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffc45205360 a2=0 a3=7ffc4520534c items=0 ppid=2252 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:16.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:17.320174 systemd-networkd[1089]: caliaf293a26690: Gained IPv6LL Jul 15 11:38:17.565112 env[1314]: time="2025-07-15T11:38:17.565006536Z" level=info msg="StopPodSandbox for \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\"" Jul 15 11:38:17.565725 env[1314]: time="2025-07-15T11:38:17.565444339Z" level=info msg="StopPodSandbox for \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\"" Jul 15 11:38:17.565725 env[1314]: time="2025-07-15T11:38:17.565637492Z" level=info msg="StopPodSandbox for \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\"" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.887 [INFO][4381] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 
11:38:17.919971 env[1314]: 2025-07-15 11:38:17.887 [INFO][4381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" iface="eth0" netns="/var/run/netns/cni-a56147c9-5a55-bf0c-6af3-caa01aecfdd4" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.888 [INFO][4381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" iface="eth0" netns="/var/run/netns/cni-a56147c9-5a55-bf0c-6af3-caa01aecfdd4" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.888 [INFO][4381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" iface="eth0" netns="/var/run/netns/cni-a56147c9-5a55-bf0c-6af3-caa01aecfdd4" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.888 [INFO][4381] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.888 [INFO][4381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.911 [INFO][4408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.911 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.911 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.915 [WARNING][4408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.916 [INFO][4408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.916 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:17.919971 env[1314]: 2025-07-15 11:38:17.918 [INFO][4381] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:17.922976 env[1314]: time="2025-07-15T11:38:17.922941866Z" level=info msg="TearDown network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\" successfully" Jul 15 11:38:17.923064 env[1314]: time="2025-07-15T11:38:17.923042905Z" level=info msg="StopPodSandbox for \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\" returns successfully" Jul 15 11:38:17.923468 kubelet[2105]: E0715 11:38:17.923443 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:17.924590 systemd[1]: run-netns-cni\x2da56147c9\x2d5a55\x2dbf0c\x2d6af3\x2dcaa01aecfdd4.mount: Deactivated successfully. Jul 15 11:38:17.926517 env[1314]: time="2025-07-15T11:38:17.926470449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mgvww,Uid:a7f801d7-4928-4dc4-8fb8-d3b03f14ceff,Namespace:kube-system,Attempt:1,}" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.882 [INFO][4380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.882 [INFO][4380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" iface="eth0" netns="/var/run/netns/cni-88229944-0b3d-c47d-9d4a-093966c35abb" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.882 [INFO][4380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" iface="eth0" netns="/var/run/netns/cni-88229944-0b3d-c47d-9d4a-093966c35abb" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.882 [INFO][4380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" iface="eth0" netns="/var/run/netns/cni-88229944-0b3d-c47d-9d4a-093966c35abb" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.882 [INFO][4380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.882 [INFO][4380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.914 [INFO][4402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.919 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.919 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.928 [WARNING][4402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.929 [INFO][4402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.930 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:17.936325 env[1314]: 2025-07-15 11:38:17.934 [INFO][4380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:17.936957 env[1314]: time="2025-07-15T11:38:17.936924605Z" level=info msg="TearDown network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\" successfully" Jul 15 11:38:17.937038 env[1314]: time="2025-07-15T11:38:17.937017760Z" level=info msg="StopPodSandbox for \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\" returns successfully" Jul 15 11:38:17.938450 env[1314]: time="2025-07-15T11:38:17.938397635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d6792,Uid:5457e7a6-68d9-4a56-8b35-b756347df804,Namespace:calico-system,Attempt:1,}" Jul 15 11:38:17.939416 systemd[1]: run-netns-cni\x2d88229944\x2d0b3d\x2dc47d\x2d9d4a\x2d093966c35abb.mount: Deactivated successfully. Jul 15 11:38:17.950422 kubelet[2105]: E0715 11:38:17.950322 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.889 [INFO][4373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.890 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" iface="eth0" netns="/var/run/netns/cni-0becc451-617a-c1ee-73e7-7cbec3a9a768" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.890 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" iface="eth0" netns="/var/run/netns/cni-0becc451-617a-c1ee-73e7-7cbec3a9a768" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.892 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" iface="eth0" netns="/var/run/netns/cni-0becc451-617a-c1ee-73e7-7cbec3a9a768" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.892 [INFO][4373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.892 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.924 [INFO][4413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.925 [INFO][4413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.930 [INFO][4413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.935 [WARNING][4413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.935 [INFO][4413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.939 [INFO][4413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:17.950531 env[1314]: 2025-07-15 11:38:17.942 [INFO][4373] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:17.952064 env[1314]: time="2025-07-15T11:38:17.952022060Z" level=info msg="TearDown network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\" successfully" Jul 15 11:38:17.952064 env[1314]: time="2025-07-15T11:38:17.952060733Z" level=info msg="StopPodSandbox for \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\" returns successfully" Jul 15 11:38:17.953307 env[1314]: time="2025-07-15T11:38:17.953268063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgg9,Uid:9513186e-84fa-49d1-893d-fcd495764a33,Namespace:calico-system,Attempt:1,}" Jul 15 11:38:18.085750 systemd-networkd[1089]: cali2b9891fe1c1: Link UP Jul 15 11:38:18.088696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:38:18.088848 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2b9891fe1c1: link becomes ready Jul 15 11:38:18.089310 systemd-networkd[1089]: cali2b9891fe1c1: Gained carrier Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.014 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--d6792-eth0 goldmane-58fd7646b9- calico-system 5457e7a6-68d9-4a56-8b35-b756347df804 1032 0 2025-07-15 11:37:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-d6792 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2b9891fe1c1 [] [] }} ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Namespace="calico-system" Pod="goldmane-58fd7646b9-d6792" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d6792-" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.015 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Namespace="calico-system" Pod="goldmane-58fd7646b9-d6792" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.051 [INFO][4483] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" HandleID="k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.051 [INFO][4483] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" HandleID="k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-d6792", "timestamp":"2025-07-15 11:38:18.051561531 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.051 [INFO][4483] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.051 [INFO][4483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.052 [INFO][4483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.060 [INFO][4483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.064 [INFO][4483] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.067 [INFO][4483] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.069 [INFO][4483] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.070 [INFO][4483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.071 [INFO][4483] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.072 [INFO][4483] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.075 [INFO][4483] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.079 [INFO][4483] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.079 [INFO][4483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" host="localhost" Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.079 [INFO][4483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:38:18.100102 env[1314]: 2025-07-15 11:38:18.079 [INFO][4483] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" HandleID="k8s-pod-network.fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:18.101542 env[1314]: 2025-07-15 11:38:18.081 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Namespace="calico-system" Pod="goldmane-58fd7646b9-d6792" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d6792-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5457e7a6-68d9-4a56-8b35-b756347df804", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-d6792", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b9891fe1c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:18.101542 env[1314]: 2025-07-15 11:38:18.081 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Namespace="calico-system" Pod="goldmane-58fd7646b9-d6792" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:18.101542 env[1314]: 2025-07-15 11:38:18.082 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b9891fe1c1 ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Namespace="calico-system" Pod="goldmane-58fd7646b9-d6792" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:18.101542 env[1314]: 2025-07-15 11:38:18.088 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Namespace="calico-system" Pod="goldmane-58fd7646b9-d6792" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:18.101542 env[1314]: 2025-07-15 11:38:18.089 [INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Namespace="calico-system" Pod="goldmane-58fd7646b9-d6792" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d6792-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5457e7a6-68d9-4a56-8b35-b756347df804", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa", Pod:"goldmane-58fd7646b9-d6792", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b9891fe1c1", MAC:"52:d1:c2:c2:18:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:18.101542 env[1314]: 2025-07-15 11:38:18.096 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa" Namespace="calico-system" Pod="goldmane-58fd7646b9-d6792" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:18.111000 audit[4510]: NETFILTER_CFG table=filter:115 family=2 entries=66 op=nft_register_chain pid=4510 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:18.111000 audit[4510]: SYSCALL arch=c000003e syscall=46 success=yes exit=32784 a0=3 a1=7ffcd9622230 a2=0 a3=7ffcd962221c items=0 ppid=3670 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:18.111000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:18.115089 env[1314]: time="2025-07-15T11:38:18.114952493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:38:18.115089 env[1314]: time="2025-07-15T11:38:18.114987709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:38:18.115089 env[1314]: time="2025-07-15T11:38:18.114996896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:38:18.115919 env[1314]: time="2025-07-15T11:38:18.115303573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa pid=4517 runtime=io.containerd.runc.v2 Jul 15 11:38:18.136420 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:38:18.151967 systemd-networkd[1089]: cali5e0618bd885: Gained IPv6LL Jul 15 11:38:18.152203 systemd-networkd[1089]: cali285f2cee6f1: Gained IPv6LL Jul 15 11:38:18.159988 env[1314]: time="2025-07-15T11:38:18.159947444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d6792,Uid:5457e7a6-68d9-4a56-8b35-b756347df804,Namespace:calico-system,Attempt:1,} returns sandbox id \"fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa\"" Jul 15 11:38:18.193003 systemd-networkd[1089]: cali0f4f4da566a: Link UP Jul 15 11:38:18.194824 systemd-networkd[1089]: cali0f4f4da566a: Gained carrier Jul 15 11:38:18.196312 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0f4f4da566a: link becomes ready Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.010 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--swgg9-eth0 csi-node-driver- calico-system 9513186e-84fa-49d1-893d-fcd495764a33 1035 0 2025-07-15 11:37:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-swgg9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f4f4da566a [] [] }} ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Namespace="calico-system" Pod="csi-node-driver-swgg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--swgg9-" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.010 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Namespace="calico-system" Pod="csi-node-driver-swgg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.057 [INFO][4488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" HandleID="k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.057 [INFO][4488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" HandleID="k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-swgg9", "timestamp":"2025-07-15 11:38:18.057356184 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.057 [INFO][4488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.080 [INFO][4488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.080 [INFO][4488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.161 [INFO][4488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.168 [INFO][4488] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.171 [INFO][4488] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.172 [INFO][4488] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.174 [INFO][4488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.174 [INFO][4488] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.175 [INFO][4488] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.178 [INFO][4488] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.186 [INFO][4488] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.186 [INFO][4488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" host="localhost" Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.186 [INFO][4488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:38:18.210602 env[1314]: 2025-07-15 11:38:18.186 [INFO][4488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" HandleID="k8s-pod-network.9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:18.211326 env[1314]: 2025-07-15 11:38:18.188 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Namespace="calico-system" Pod="csi-node-driver-swgg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--swgg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--swgg9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9513186e-84fa-49d1-893d-fcd495764a33", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-swgg9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f4f4da566a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:18.211326 env[1314]: 2025-07-15 11:38:18.188 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Namespace="calico-system" Pod="csi-node-driver-swgg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:18.211326 env[1314]: 2025-07-15 11:38:18.188 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f4f4da566a ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Namespace="calico-system" Pod="csi-node-driver-swgg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:18.211326 env[1314]: 2025-07-15 11:38:18.195 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Namespace="calico-system" Pod="csi-node-driver-swgg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:18.211326 env[1314]: 2025-07-15 11:38:18.199 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Namespace="calico-system" Pod="csi-node-driver-swgg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--swgg9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--swgg9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9513186e-84fa-49d1-893d-fcd495764a33", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c", Pod:"csi-node-driver-swgg9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f4f4da566a", MAC:"fa:1e:85:35:41:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:18.211326 env[1314]: 2025-07-15 11:38:18.208 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c" Namespace="calico-system" Pod="csi-node-driver-swgg9" WorkloadEndpoint="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:18.220517 env[1314]: time="2025-07-15T11:38:18.220464344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:38:18.220657 env[1314]: time="2025-07-15T11:38:18.220501965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:38:18.220657 env[1314]: time="2025-07-15T11:38:18.220511583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:38:18.220657 env[1314]: time="2025-07-15T11:38:18.220629214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c pid=4565 runtime=io.containerd.runc.v2 Jul 15 11:38:18.221000 audit[4575]: NETFILTER_CFG table=filter:116 family=2 entries=52 op=nft_register_chain pid=4575 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:18.221000 audit[4575]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7ffffcd8cae0 a2=0 a3=7ffffcd8cacc items=0 ppid=3670 pid=4575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:18.221000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:18.245846 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:38:18.255827 env[1314]: time="2025-07-15T11:38:18.255779940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-swgg9,Uid:9513186e-84fa-49d1-893d-fcd495764a33,Namespace:calico-system,Attempt:1,} returns sandbox id \"9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c\"" Jul 15 11:38:18.280403 systemd-networkd[1089]: cali7531955ee06: Gained IPv6LL Jul 15 11:38:18.300540 systemd-networkd[1089]: calif2e2da62cfe: Link UP Jul 15 11:38:18.302776 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif2e2da62cfe: link becomes ready Jul 15 11:38:18.302201 systemd-networkd[1089]: calif2e2da62cfe: Gained carrier Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:17.995 [INFO][4427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0 coredns-7c65d6cfc9- kube-system a7f801d7-4928-4dc4-8fb8-d3b03f14ceff 1033 0 2025-07-15 11:37:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mgvww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2e2da62cfe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mgvww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mgvww-" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:17.995 [INFO][4427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mgvww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.060 [INFO][4474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" HandleID="k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.060 [INFO][4474] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" HandleID="k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mgvww", "timestamp":"2025-07-15 11:38:18.06079623 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.060 [INFO][4474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.186 [INFO][4474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.186 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.262 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.269 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.276 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.277 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.280 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.280 [INFO][4474] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.282 [INFO][4474] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48 Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.288 [INFO][4474] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.295 [INFO][4474] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.296 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" host="localhost" Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.296 [INFO][4474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 11:38:18.314955 env[1314]: 2025-07-15 11:38:18.296 [INFO][4474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" HandleID="k8s-pod-network.4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:18.315879 env[1314]: 2025-07-15 11:38:18.298 [INFO][4427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mgvww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a7f801d7-4928-4dc4-8fb8-d3b03f14ceff", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mgvww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2e2da62cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:18.315879 env[1314]: 2025-07-15 11:38:18.299 [INFO][4427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mgvww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:18.315879 env[1314]: 2025-07-15 11:38:18.299 [INFO][4427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2e2da62cfe ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mgvww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:18.315879 env[1314]: 2025-07-15 11:38:18.302 [INFO][4427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mgvww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:18.315879 env[1314]: 2025-07-15 11:38:18.302 
[INFO][4427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mgvww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a7f801d7-4928-4dc4-8fb8-d3b03f14ceff", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48", Pod:"coredns-7c65d6cfc9-mgvww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2e2da62cfe", MAC:"02:01:8b:11:a3:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:18.315879 env[1314]: 2025-07-15 11:38:18.312 [INFO][4427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mgvww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:18.324000 audit[4609]: NETFILTER_CFG table=filter:117 family=2 entries=52 op=nft_register_chain pid=4609 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 15 11:38:18.324000 audit[4609]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffe8d181f40 a2=0 a3=7ffe8d181f2c items=0 ppid=3670 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:18.324000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 15 11:38:18.332326 env[1314]: time="2025-07-15T11:38:18.329857862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:38:18.332326 env[1314]: time="2025-07-15T11:38:18.329902075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:38:18.332326 env[1314]: time="2025-07-15T11:38:18.329913767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:38:18.332326 env[1314]: time="2025-07-15T11:38:18.330062567Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48 pid=4618 runtime=io.containerd.runc.v2 Jul 15 11:38:18.356079 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:38:18.377436 env[1314]: time="2025-07-15T11:38:18.377391836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mgvww,Uid:a7f801d7-4928-4dc4-8fb8-d3b03f14ceff,Namespace:kube-system,Attempt:1,} returns sandbox id \"4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48\"" Jul 15 11:38:18.378966 kubelet[2105]: E0715 11:38:18.378545 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:18.380151 env[1314]: time="2025-07-15T11:38:18.380126767Z" level=info msg="CreateContainer within sandbox \"4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:38:18.384911 systemd[1]: run-netns-cni\x2d0becc451\x2d617a\x2dc1ee\x2d73e7\x2d7cbec3a9a768.mount: Deactivated successfully. Jul 15 11:38:18.394346 env[1314]: time="2025-07-15T11:38:18.394313055Z" level=info msg="CreateContainer within sandbox \"4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5755ef4a423d20d6b5b582917de872de11919472146a1e4c195320e43eb27070\"" Jul 15 11:38:18.394760 env[1314]: time="2025-07-15T11:38:18.394724187Z" level=info msg="StartContainer for \"5755ef4a423d20d6b5b582917de872de11919472146a1e4c195320e43eb27070\"" Jul 15 11:38:18.441158 env[1314]: time="2025-07-15T11:38:18.441103340Z" level=info msg="StartContainer for \"5755ef4a423d20d6b5b582917de872de11919472146a1e4c195320e43eb27070\" returns successfully" Jul 15 11:38:18.955437 kubelet[2105]: E0715 11:38:18.955364 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:18.955872 kubelet[2105]: E0715 11:38:18.955497 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:19.458272 kubelet[2105]: I0715 11:38:19.457650 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mgvww" podStartSLOduration=39.457630447 podStartE2EDuration="39.457630447s" podCreationTimestamp="2025-07-15 11:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:38:19.456626982 +0000 UTC m=+43.983146527" watchObservedRunningTime="2025-07-15 11:38:19.457630447 +0000 UTC m=+43.984149992" Jul 15 11:38:19.464000 audit[4693]: NETFILTER_CFG table=filter:118 family=2 entries=12 op=nft_register_rule pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 
11:38:19.464000 audit[4693]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff0afa56f0 a2=0 a3=7fff0afa56dc items=0 ppid=2252 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:19.464000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:19.470000 audit[4693]: NETFILTER_CFG table=nat:119 family=2 entries=46 op=nft_register_rule pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:19.470000 audit[4693]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7fff0afa56f0 a2=0 a3=7fff0afa56dc items=0 ppid=2252 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:19.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:19.476321 env[1314]: time="2025-07-15T11:38:19.476266414Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:19.479430 env[1314]: time="2025-07-15T11:38:19.479393360Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:19.481679 env[1314]: time="2025-07-15T11:38:19.481653018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:19.483163 env[1314]: time="2025-07-15T11:38:19.483128711Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:19.483697 env[1314]: time="2025-07-15T11:38:19.483648878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 11:38:19.483000 audit[4695]: NETFILTER_CFG table=filter:120 family=2 entries=12 op=nft_register_rule pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:19.483000 audit[4695]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffb37b9770 a2=0 a3=7fffb37b975c items=0 ppid=2252 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:19.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:19.485328 env[1314]: time="2025-07-15T11:38:19.484908856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 11:38:19.486215 env[1314]: time="2025-07-15T11:38:19.486179455Z" level=info msg="CreateContainer within sandbox \"4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47\" 
for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 11:38:19.493000 audit[4695]: NETFILTER_CFG table=nat:121 family=2 entries=58 op=nft_register_chain pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:19.493000 audit[4695]: SYSCALL arch=c000003e syscall=46 success=yes exit=20628 a0=3 a1=7fffb37b9770 a2=0 a3=7fffb37b975c items=0 ppid=2252 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:19.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:19.498528 env[1314]: time="2025-07-15T11:38:19.498500652Z" level=info msg="CreateContainer within sandbox \"4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cac246bcaf2008ac0f44d0b6b73697bce070f2ebf77e68d4690ddd0fc9c3ba94\"" Jul 15 11:38:19.498987 env[1314]: time="2025-07-15T11:38:19.498958352Z" level=info msg="StartContainer for \"cac246bcaf2008ac0f44d0b6b73697bce070f2ebf77e68d4690ddd0fc9c3ba94\"" Jul 15 11:38:19.590636 env[1314]: time="2025-07-15T11:38:19.590576676Z" level=info msg="StartContainer for \"cac246bcaf2008ac0f44d0b6b73697bce070f2ebf77e68d4690ddd0fc9c3ba94\" returns successfully" Jul 15 11:38:19.623388 systemd-networkd[1089]: cali0f4f4da566a: Gained IPv6LL Jul 15 11:38:19.811561 env[1314]: time="2025-07-15T11:38:19.811502090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:19.813487 env[1314]: time="2025-07-15T11:38:19.813447677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:19.815306 env[1314]: time="2025-07-15T11:38:19.815276965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:19.816641 env[1314]: time="2025-07-15T11:38:19.816612585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:19.817115 env[1314]: time="2025-07-15T11:38:19.817072639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 11:38:19.818212 env[1314]: time="2025-07-15T11:38:19.818180532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 11:38:19.819162 env[1314]: time="2025-07-15T11:38:19.819112794Z" level=info msg="CreateContainer within sandbox \"ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 11:38:19.834998 env[1314]: time="2025-07-15T11:38:19.834966010Z" level=info msg="CreateContainer within sandbox \"ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"2d3541110bc313e46f59d013b5e5cb47f70d5c5ce4ad0f8877148e06b528222f\"" Jul 15 11:38:19.835459 env[1314]: time="2025-07-15T11:38:19.835421957Z" level=info msg="StartContainer for \"2d3541110bc313e46f59d013b5e5cb47f70d5c5ce4ad0f8877148e06b528222f\"" Jul 15 11:38:19.888780 env[1314]: time="2025-07-15T11:38:19.888731561Z" level=info msg="StartContainer for \"2d3541110bc313e46f59d013b5e5cb47f70d5c5ce4ad0f8877148e06b528222f\" returns successfully" Jul 15 11:38:19.960260 kubelet[2105]: E0715 11:38:19.960217 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:19.979692 kubelet[2105]: I0715 11:38:19.979636 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-548f644bc4-kt62w" podStartSLOduration=27.662286481 podStartE2EDuration="30.979618941s" podCreationTimestamp="2025-07-15 11:37:49 +0000 UTC" firstStartedPulling="2025-07-15 11:38:16.16741295 +0000 UTC m=+40.693932485" lastFinishedPulling="2025-07-15 11:38:19.48474541 +0000 UTC m=+44.011264945" observedRunningTime="2025-07-15 11:38:19.971433457 +0000 UTC m=+44.497952992" watchObservedRunningTime="2025-07-15 11:38:19.979618941 +0000 UTC m=+44.506138486" Jul 15 11:38:20.007367 systemd-networkd[1089]: calif2e2da62cfe: Gained IPv6LL Jul 15 11:38:20.071361 systemd-networkd[1089]: cali2b9891fe1c1: Gained IPv6LL Jul 15 11:38:20.509000 audit[4783]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:20.509000 audit[4783]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff82a69c10 a2=0 a3=7fff82a69bfc items=0 ppid=2252 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:20.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:20.519000 audit[4783]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=4783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:20.519000 audit[4783]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff82a69c10 a2=0 a3=7fff82a69bfc items=0 ppid=2252 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:20.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:20.962165 kubelet[2105]: I0715 11:38:20.962127 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:38:20.962165 kubelet[2105]: I0715 11:38:20.962153 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:38:20.962591 kubelet[2105]: E0715 11:38:20.962444 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:38:20.997456 kernel: kauditd_printk_skb: 58 callbacks suppressed Jul 15 11:38:20.997548 kernel: audit: type=1130 audit(1752579500.993:428): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.133:22-10.0.0.1:34144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:20.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.133:22-10.0.0.1:34144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:20.995056 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:34144.service. Jul 15 11:38:21.034000 audit[4784]: USER_ACCT pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.041447 kernel: audit: type=1101 audit(1752579501.034:429): pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.041493 sshd[4784]: Accepted publickey for core from 10.0.0.1 port 34144 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:21.040000 audit[4784]: CRED_ACQ pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.041810 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:21.047369 kernel: audit: type=1103 audit(1752579501.040:430): pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.040000 audit[4784]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffec740930 a2=3 a3=0 items=0 ppid=1 pid=4784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:21.056202 systemd[1]: Started session-10.scope. Jul 15 11:38:21.057004 systemd-logind[1296]: New session 10 of user core. 
Jul 15 11:38:21.058259 kernel: audit: type=1006 audit(1752579501.040:431): pid=4784 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 15 11:38:21.058333 kernel: audit: type=1300 audit(1752579501.040:431): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffec740930 a2=3 a3=0 items=0 ppid=1 pid=4784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:21.040000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:21.060263 kernel: audit: type=1327 audit(1752579501.040:431): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:21.063000 audit[4784]: USER_START pid=4784 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.063000 audit[4787]: CRED_ACQ pid=4787 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.084434 kernel: audit: type=1105 audit(1752579501.063:432): pid=4784 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.084508 kernel: audit: type=1103 audit(1752579501.063:433): pid=4787 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.673049 sshd[4784]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:21.672000 audit[4784]: USER_END pid=4784 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.675668 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:34144.service: Deactivated successfully. Jul 15 11:38:21.676736 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 11:38:21.677369 systemd-logind[1296]: Session 10 logged out. Waiting for processes to exit. Jul 15 11:38:21.678332 systemd-logind[1296]: Removed session 10. 
Jul 15 11:38:21.672000 audit[4784]: CRED_DISP pid=4784 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.682571 kernel: audit: type=1106 audit(1752579501.672:434): pid=4784 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.682641 kernel: audit: type=1104 audit(1752579501.672:435): pid=4784 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:21.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.133:22-10.0.0.1:34144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:22.809408 kubelet[2105]: I0715 11:38:22.809368 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:38:22.943816 kubelet[2105]: I0715 11:38:22.943756 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-548f644bc4-2cx2f" podStartSLOduration=30.387273683 podStartE2EDuration="33.943739801s" podCreationTimestamp="2025-07-15 11:37:49 +0000 UTC" firstStartedPulling="2025-07-15 11:38:16.261443214 +0000 UTC m=+40.787962749" lastFinishedPulling="2025-07-15 11:38:19.817909322 +0000 UTC m=+44.344428867" observedRunningTime="2025-07-15 11:38:19.980155941 +0000 UTC m=+44.506675486" watchObservedRunningTime="2025-07-15 11:38:22.943739801 +0000 UTC m=+47.470259347" Jul 15 11:38:22.948459 env[1314]: time="2025-07-15T11:38:22.948424473Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:22.951144 env[1314]: time="2025-07-15T11:38:22.951092886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:22.953300 env[1314]: time="2025-07-15T11:38:22.953273131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:22.956049 env[1314]: time="2025-07-15T11:38:22.955988693Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:22.956527 env[1314]: time="2025-07-15T11:38:22.956474095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 11:38:22.958413 env[1314]: time="2025-07-15T11:38:22.958198574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 11:38:22.960000 audit[4801]: NETFILTER_CFG table=filter:124 family=2 entries=11 op=nft_register_rule 
pid=4801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:22.960000 audit[4801]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe740e2810 a2=0 a3=7ffe740e27fc items=0 ppid=2252 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:22.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:22.965000 audit[4801]: NETFILTER_CFG table=nat:125 family=2 entries=29 op=nft_register_chain pid=4801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:22.965000 audit[4801]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffe740e2810 a2=0 a3=7ffe740e27fc items=0 ppid=2252 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:22.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:22.969280 env[1314]: time="2025-07-15T11:38:22.969230429Z" level=info msg="CreateContainer within sandbox \"2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 11:38:22.982383 env[1314]: time="2025-07-15T11:38:22.982352500Z" level=info msg="CreateContainer within sandbox \"2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"61a11ed755ac25e92b115d2095d1356682faa04be97e45f096002e02d7b7b8a4\"" Jul 15 11:38:22.982753 env[1314]: time="2025-07-15T11:38:22.982702989Z" level=info msg="StartContainer for \"61a11ed755ac25e92b115d2095d1356682faa04be97e45f096002e02d7b7b8a4\"" Jul 15 11:38:23.034383 env[1314]: time="2025-07-15T11:38:23.034333054Z" level=info msg="StartContainer for \"61a11ed755ac25e92b115d2095d1356682faa04be97e45f096002e02d7b7b8a4\" returns successfully" Jul 15 11:38:24.971725 kubelet[2105]: I0715 11:38:24.971685 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:38:25.866439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681961780.mount: Deactivated successfully. Jul 15 11:38:26.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.133:22-10.0.0.1:34152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:26.675520 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:34152.service. Jul 15 11:38:26.676560 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 15 11:38:26.676681 kernel: audit: type=1130 audit(1752579506.674:439): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.133:22-10.0.0.1:34152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:38:26.946000 audit[4856]: USER_ACCT pid=4856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:26.948039 sshd[4856]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:26.956330 kernel: audit: type=1101 audit(1752579506.946:440): pid=4856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:26.956414 kernel: audit: type=1103 audit(1752579506.950:441): pid=4856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:26.956440 kernel: audit: type=1006 audit(1752579506.950:442): pid=4856 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 15 11:38:26.950000 audit[4856]: CRED_ACQ pid=4856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:26.955674 systemd-logind[1296]: New session 11 of user core. Jul 15 11:38:26.951890 sshd[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:26.956374 systemd[1]: Started session-11.scope. 
Jul 15 11:38:26.950000 audit[4856]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbaf828d0 a2=3 a3=0 items=0 ppid=1 pid=4856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:26.964202 kernel: audit: type=1300 audit(1752579506.950:442): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbaf828d0 a2=3 a3=0 items=0 ppid=1 pid=4856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:26.964272 kernel: audit: type=1327 audit(1752579506.950:442): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:26.950000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:26.960000 audit[4856]: USER_START pid=4856 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:26.970445 kernel: audit: type=1105 audit(1752579506.960:443): pid=4856 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:26.970554 kernel: audit: type=1103 audit(1752579506.961:444): pid=4859 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:26.961000 audit[4859]: CRED_ACQ pid=4859 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:26.972323 env[1314]: time="2025-07-15T11:38:26.972288107Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:26.974271 env[1314]: time="2025-07-15T11:38:26.974205157Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:26.976056 env[1314]: time="2025-07-15T11:38:26.976021929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:26.977415 env[1314]: time="2025-07-15T11:38:26.977372354Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:26.977911 env[1314]: time="2025-07-15T11:38:26.977870459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 11:38:26.978877 env[1314]: time="2025-07-15T11:38:26.978847684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 
15 11:38:26.980267 env[1314]: time="2025-07-15T11:38:26.979769174Z" level=info msg="CreateContainer within sandbox \"fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 11:38:26.995864 env[1314]: time="2025-07-15T11:38:26.995827358Z" level=info msg="CreateContainer within sandbox \"fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"580582c97ede4df6f164fbc46546b67f96818c266a4528e6ea207f1ec383bf5e\"" Jul 15 11:38:26.996206 env[1314]: time="2025-07-15T11:38:26.996177084Z" level=info msg="StartContainer for \"580582c97ede4df6f164fbc46546b67f96818c266a4528e6ea207f1ec383bf5e\"" Jul 15 11:38:27.055579 env[1314]: time="2025-07-15T11:38:27.055534860Z" level=info msg="StartContainer for \"580582c97ede4df6f164fbc46546b67f96818c266a4528e6ea207f1ec383bf5e\" returns successfully" Jul 15 11:38:27.227498 sshd[4856]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:27.228000 audit[4856]: USER_END pid=4856 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.230055 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:34166.service. Jul 15 11:38:27.230968 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:34152.service: Deactivated successfully. Jul 15 11:38:27.231536 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 11:38:27.232652 systemd-logind[1296]: Session 11 logged out. Waiting for processes to exit. Jul 15 11:38:27.233537 systemd-logind[1296]: Removed session 11. Jul 15 11:38:27.237427 kernel: audit: type=1106 audit(1752579507.228:445): pid=4856 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.237539 kernel: audit: type=1104 audit(1752579507.228:446): pid=4856 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.228000 audit[4856]: CRED_DISP pid=4856 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.133:22-10.0.0.1:34166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:27.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.133:22-10.0.0.1:34152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:38:27.263000 audit[4905]: USER_ACCT pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.264684 sshd[4905]: Accepted publickey for core from 10.0.0.1 port 34166 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:27.264000 audit[4905]: CRED_ACQ pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.264000 audit[4905]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2fa0b520 a2=3 a3=0 items=0 ppid=1 pid=4905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:27.264000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:27.265650 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:27.268708 systemd-logind[1296]: New session 12 of user core. Jul 15 11:38:27.269377 systemd[1]: Started session-12.scope. Jul 15 11:38:27.272000 audit[4905]: USER_START pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.273000 audit[4910]: CRED_ACQ pid=4910 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.438351 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:34182.service. Jul 15 11:38:27.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.133:22-10.0.0.1:34182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:27.438694 sshd[4905]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:27.438000 audit[4905]: USER_END pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.438000 audit[4905]: CRED_DISP pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.441326 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:34166.service: Deactivated successfully. Jul 15 11:38:27.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.133:22-10.0.0.1:34166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:27.445005 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 11:38:27.445657 systemd-logind[1296]: Session 12 logged out. Waiting for processes to exit. 
Jul 15 11:38:27.446786 systemd-logind[1296]: Removed session 12. Jul 15 11:38:27.482000 audit[4918]: USER_ACCT pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.484007 sshd[4918]: Accepted publickey for core from 10.0.0.1 port 34182 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:27.483000 audit[4918]: CRED_ACQ pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.483000 audit[4918]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde41e3df0 a2=3 a3=0 items=0 ppid=1 pid=4918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:27.483000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:27.485314 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:27.489435 systemd-logind[1296]: New session 13 of user core. Jul 15 11:38:27.490209 systemd[1]: Started session-13.scope. Jul 15 11:38:27.495000 audit[4918]: USER_START pid=4918 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.496000 audit[4923]: CRED_ACQ pid=4923 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.606816 sshd[4918]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:27.606000 audit[4918]: USER_END pid=4918 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.606000 audit[4918]: CRED_DISP pid=4918 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:27.611255 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:34182.service: Deactivated successfully. Jul 15 11:38:27.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.133:22-10.0.0.1:34182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:27.612317 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 11:38:27.612803 systemd-logind[1296]: Session 13 logged out. Waiting for processes to exit. Jul 15 11:38:27.613842 systemd-logind[1296]: Removed session 13. 
Jul 15 11:38:28.517763 kubelet[2105]: I0715 11:38:28.517711 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54b4db4784-n8kns" podStartSLOduration=29.931283422 podStartE2EDuration="36.51769425s" podCreationTimestamp="2025-07-15 11:37:52 +0000 UTC" firstStartedPulling="2025-07-15 11:38:16.371503523 +0000 UTC m=+40.898023068" lastFinishedPulling="2025-07-15 11:38:22.95791417 +0000 UTC m=+47.484433896" observedRunningTime="2025-07-15 11:38:24.113853473 +0000 UTC m=+48.640373018" watchObservedRunningTime="2025-07-15 11:38:28.51769425 +0000 UTC m=+53.044213795" Jul 15 11:38:28.537000 audit[4959]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=4959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:28.537000 audit[4959]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc4f333c50 a2=0 a3=7ffc4f333c3c items=0 ppid=2252 pid=4959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:28.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:28.543000 audit[4959]: NETFILTER_CFG table=nat:127 family=2 entries=24 op=nft_register_rule pid=4959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:28.543000 audit[4959]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffc4f333c50 a2=0 a3=7ffc4f333c3c items=0 ppid=2252 pid=4959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:28.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:28.985866 env[1314]: time="2025-07-15T11:38:28.985317741Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:29.031810 env[1314]: time="2025-07-15T11:38:29.031762980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:29.038125 env[1314]: time="2025-07-15T11:38:29.038073506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:29.039772 env[1314]: time="2025-07-15T11:38:29.039747177Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:29.040043 env[1314]: time="2025-07-15T11:38:29.040005162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 11:38:29.042390 env[1314]: time="2025-07-15T11:38:29.042358941Z" level=info msg="CreateContainer within sandbox \"9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 11:38:29.059334 env[1314]: time="2025-07-15T11:38:29.059290624Z" level=info msg="CreateContainer within sandbox \"9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8e77c7d8e2c4651df1cf77a8c54cc5928f2502d672af3f54a110f182b2299492\"" Jul 15 11:38:29.059761 env[1314]: time="2025-07-15T11:38:29.059698059Z" level=info msg="StartContainer for \"8e77c7d8e2c4651df1cf77a8c54cc5928f2502d672af3f54a110f182b2299492\"" Jul 15 11:38:29.113304 env[1314]: time="2025-07-15T11:38:29.113227964Z" level=info msg="StartContainer for \"8e77c7d8e2c4651df1cf77a8c54cc5928f2502d672af3f54a110f182b2299492\" returns successfully" Jul 15 11:38:29.115209 env[1314]: time="2025-07-15T11:38:29.115170591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 11:38:30.599260 env[1314]: time="2025-07-15T11:38:30.599177119Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:30.601094 env[1314]: time="2025-07-15T11:38:30.601046267Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:30.602661 env[1314]: time="2025-07-15T11:38:30.602640630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:30.604099 env[1314]: time="2025-07-15T11:38:30.604064643Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:38:30.604493 env[1314]: time="2025-07-15T11:38:30.604462530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 11:38:30.606772 env[1314]: time="2025-07-15T11:38:30.606734264Z" level=info msg="CreateContainer within sandbox \"9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 11:38:30.620114 env[1314]: time="2025-07-15T11:38:30.620054416Z" level=info msg="CreateContainer within sandbox \"9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ad5a88d0b710d71b53c16b9a2b86e0ad7950b94a7a7f8e33e0767ddb0dc21301\"" Jul 15 11:38:30.622077 env[1314]: time="2025-07-15T11:38:30.620639925Z" level=info msg="StartContainer for \"ad5a88d0b710d71b53c16b9a2b86e0ad7950b94a7a7f8e33e0767ddb0dc21301\"" Jul 15 11:38:30.664863 env[1314]: time="2025-07-15T11:38:30.664819471Z" level=info msg="StartContainer for \"ad5a88d0b710d71b53c16b9a2b86e0ad7950b94a7a7f8e33e0767ddb0dc21301\" returns successfully" Jul 15 11:38:30.828062 kubelet[2105]: I0715 11:38:30.828028 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:38:30.870833 kubelet[2105]: I0715 11:38:30.870677 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/goldmane-58fd7646b9-d6792" podStartSLOduration=31.053165029 podStartE2EDuration="39.870658685s" podCreationTimestamp="2025-07-15 11:37:51 +0000 UTC" firstStartedPulling="2025-07-15 11:38:18.161208385 +0000 UTC m=+42.687727930" lastFinishedPulling="2025-07-15 11:38:26.978702041 +0000 UTC m=+51.505221586" observedRunningTime="2025-07-15 11:38:28.51851469 +0000 UTC m=+53.045034225" watchObservedRunningTime="2025-07-15 11:38:30.870658685 +0000 UTC m=+55.397178230" Jul 15 11:38:30.999120 kubelet[2105]: I0715 11:38:30.999066 2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-swgg9" podStartSLOduration=26.650586092 podStartE2EDuration="38.999051781s" podCreationTimestamp="2025-07-15 11:37:52 +0000 UTC" firstStartedPulling="2025-07-15 11:38:18.2568649 +0000 UTC m=+42.783384445" lastFinishedPulling="2025-07-15 11:38:30.605330589 +0000 UTC m=+55.131850134" observedRunningTime="2025-07-15 11:38:30.998735107 +0000 UTC m=+55.525254642" watchObservedRunningTime="2025-07-15 11:38:30.999051781 +0000 UTC m=+55.525571326" Jul 15 11:38:31.666433 kubelet[2105]: I0715 11:38:31.666394 2105 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 11:38:31.666433 kubelet[2105]: I0715 11:38:31.666432 2105 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 11:38:32.610203 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:35444.service. Jul 15 11:38:32.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.133:22-10.0.0.1:35444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:32.612191 kernel: kauditd_printk_skb: 29 callbacks suppressed Jul 15 11:38:32.612259 kernel: audit: type=1130 audit(1752579512.609:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.133:22-10.0.0.1:35444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:32.645000 audit[5092]: USER_ACCT pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.647321 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 35444 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:32.648728 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:32.647000 audit[5092]: CRED_ACQ pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.652876 systemd-logind[1296]: New session 14 of user core. Jul 15 11:38:32.652959 systemd[1]: Started session-14.scope. 
Jul 15 11:38:32.656766 kernel: audit: type=1101 audit(1752579512.645:469): pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.656832 kernel: audit: type=1103 audit(1752579512.647:470): pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.659637 kernel: audit: type=1006 audit(1752579512.647:471): pid=5092 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 15 11:38:32.659681 kernel: audit: type=1300 audit(1752579512.647:471): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffb8a6920 a2=3 a3=0 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:32.647000 audit[5092]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffb8a6920 a2=3 a3=0 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:32.663792 kernel: audit: type=1327 audit(1752579512.647:471): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:32.647000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:32.665177 kernel: audit: type=1105 audit(1752579512.656:472): pid=5092 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.656000 audit[5092]: USER_START pid=5092 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.669609 kernel: audit: type=1103 audit(1752579512.657:473): pid=5095 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.657000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.810963 sshd[5092]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:32.810000 audit[5092]: USER_END pid=5092 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.812838 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:35444.service: Deactivated successfully. Jul 15 11:38:32.813527 systemd[1]: session-14.scope: Deactivated successfully. 
Jul 15 11:38:32.814280 systemd-logind[1296]: Session 14 logged out. Waiting for processes to exit. Jul 15 11:38:32.815106 systemd-logind[1296]: Removed session 14. Jul 15 11:38:32.810000 audit[5092]: CRED_DISP pid=5092 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.819879 kernel: audit: type=1106 audit(1752579512.810:474): pid=5092 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.819935 kernel: audit: type=1104 audit(1752579512.810:475): pid=5092 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:32.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.133:22-10.0.0.1:35444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:35.553432 env[1314]: time="2025-07-15T11:38:35.553335509Z" level=info msg="StopPodSandbox for \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\"" Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.600 [WARNING][5122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d6792-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5457e7a6-68d9-4a56-8b35-b756347df804", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa", Pod:"goldmane-58fd7646b9-d6792", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b9891fe1c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.601 [INFO][5122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.601 [INFO][5122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" iface="eth0" netns="" Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.601 [INFO][5122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.601 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.643 [INFO][5133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.644 [INFO][5133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.644 [INFO][5133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.650 [WARNING][5133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.650 [INFO][5133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.651 [INFO][5133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:35.656230 env[1314]: 2025-07-15 11:38:35.653 [INFO][5122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:35.656711 env[1314]: time="2025-07-15T11:38:35.656237510Z" level=info msg="TearDown network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\" successfully" Jul 15 11:38:35.656711 env[1314]: time="2025-07-15T11:38:35.656275882Z" level=info msg="StopPodSandbox for \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\" returns successfully" Jul 15 11:38:35.656871 env[1314]: time="2025-07-15T11:38:35.656824572Z" level=info msg="RemovePodSandbox for \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\"" Jul 15 11:38:35.656901 env[1314]: time="2025-07-15T11:38:35.656861361Z" level=info msg="Forcibly stopping sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\"" Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.684 [WARNING][5150] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d6792-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5457e7a6-68d9-4a56-8b35-b756347df804", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fee5361c27c88c527fcb10bf643b611bb6240509b51defbc61000ff5833663fa", Pod:"goldmane-58fd7646b9-d6792", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b9891fe1c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.685 [INFO][5150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.685 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" iface="eth0" netns="" Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.685 [INFO][5150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.685 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.905 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.905 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.905 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.910 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.910 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" HandleID="k8s-pod-network.43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Workload="localhost-k8s-goldmane--58fd7646b9--d6792-eth0" Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.912 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:35.917827 env[1314]: 2025-07-15 11:38:35.914 [INFO][5150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b" Jul 15 11:38:35.917827 env[1314]: time="2025-07-15T11:38:35.917259823Z" level=info msg="TearDown network for sandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\" successfully" Jul 15 11:38:35.987316 env[1314]: time="2025-07-15T11:38:35.987227689Z" level=info msg="RemovePodSandbox \"43458ff24891314cf12ce002dd400b9bcbbe9457ac8af22fb4dca0b1336f2e7b\" returns successfully" Jul 15 11:38:35.987907 env[1314]: time="2025-07-15T11:38:35.987868371Z" level=info msg="StopPodSandbox for \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\"" Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.020 [WARNING][5176] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9489b1af-289a-4806-935c-bff657fb9645", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d", Pod:"coredns-7c65d6cfc9-sbpvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7531955ee06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.021 [INFO][5176] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.021 [INFO][5176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" iface="eth0" netns="" Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.021 [INFO][5176] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.021 [INFO][5176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.038 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.039 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.039 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.045 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.045 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.047 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.051570 env[1314]: 2025-07-15 11:38:36.049 [INFO][5176] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:36.052043 env[1314]: time="2025-07-15T11:38:36.051575453Z" level=info msg="TearDown network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\" successfully" Jul 15 11:38:36.052043 env[1314]: time="2025-07-15T11:38:36.051606070Z" level=info msg="StopPodSandbox for \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\" returns successfully" Jul 15 11:38:36.052290 env[1314]: time="2025-07-15T11:38:36.052232175Z" level=info msg="RemovePodSandbox for \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\"" Jul 15 11:38:36.052449 env[1314]: time="2025-07-15T11:38:36.052291776Z" level=info msg="Forcibly stopping sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\"" Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.081 [WARNING][5203] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9489b1af-289a-4806-935c-bff657fb9645", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"287904d3696f4ac037589a630fdd8d1e19d03b342417a81fbe1f02cd6abe652d", Pod:"coredns-7c65d6cfc9-sbpvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7531955ee06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.081 [INFO][5203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.081 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" iface="eth0" netns="" Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.081 [INFO][5203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.081 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.099 [INFO][5213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.100 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.100 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.105 [WARNING][5213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.105 [INFO][5213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" HandleID="k8s-pod-network.2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Workload="localhost-k8s-coredns--7c65d6cfc9--sbpvz-eth0" Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.106 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.109795 env[1314]: 2025-07-15 11:38:36.108 [INFO][5203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494" Jul 15 11:38:36.110623 env[1314]: time="2025-07-15T11:38:36.109800037Z" level=info msg="TearDown network for sandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\" successfully" Jul 15 11:38:36.113696 env[1314]: time="2025-07-15T11:38:36.113665319Z" level=info msg="RemovePodSandbox \"2699f388a158548009561343114335c18da9d6af836b269940ee2f675fc75494\" returns successfully" Jul 15 11:38:36.114260 env[1314]: time="2025-07-15T11:38:36.114198950Z" level=info msg="StopPodSandbox for \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\"" Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.143 [WARNING][5231] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0", GenerateName:"calico-apiserver-548f644bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c60b401d-8be3-4942-8e09-43794a037070", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548f644bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31", Pod:"calico-apiserver-548f644bc4-2cx2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf293a26690", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.144 [INFO][5231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:36.172753 env[1314]: 2025-07-15 
11:38:36.144 [INFO][5231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" iface="eth0" netns="" Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.144 [INFO][5231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.144 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.161 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.161 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.161 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.167 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.167 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.168 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.172753 env[1314]: 2025-07-15 11:38:36.170 [INFO][5231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:36.173502 env[1314]: time="2025-07-15T11:38:36.172738265Z" level=info msg="TearDown network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\" successfully" Jul 15 11:38:36.173502 env[1314]: time="2025-07-15T11:38:36.172770145Z" level=info msg="StopPodSandbox for \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\" returns successfully" Jul 15 11:38:36.173677 env[1314]: time="2025-07-15T11:38:36.173631180Z" level=info msg="RemovePodSandbox for \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\"" Jul 15 11:38:36.173728 env[1314]: time="2025-07-15T11:38:36.173670183Z" level=info msg="Forcibly stopping sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\"" Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.203 [WARNING][5258] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0", GenerateName:"calico-apiserver-548f644bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c60b401d-8be3-4942-8e09-43794a037070", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548f644bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddc4837cb4904bb5c4afb840e6c6d5e130aa0fd1b7eb717dcde3dd2ad2dbee31", Pod:"calico-apiserver-548f644bc4-2cx2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf293a26690", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.204 [INFO][5258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.204 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" iface="eth0" netns="" Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.204 [INFO][5258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.204 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.223 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.223 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.223 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.228 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.228 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" HandleID="k8s-pod-network.5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Workload="localhost-k8s-calico--apiserver--548f644bc4--2cx2f-eth0" Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.230 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.233879 env[1314]: 2025-07-15 11:38:36.232 [INFO][5258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd" Jul 15 11:38:36.234370 env[1314]: time="2025-07-15T11:38:36.233905721Z" level=info msg="TearDown network for sandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\" successfully" Jul 15 11:38:36.237734 env[1314]: time="2025-07-15T11:38:36.237712293Z" level=info msg="RemovePodSandbox \"5fd21d5140740da1bc27cc8ef6cd9ec35c3136c8e5594ecf60ce5567521175dd\" returns successfully" Jul 15 11:38:36.238330 env[1314]: time="2025-07-15T11:38:36.238290197Z" level=info msg="StopPodSandbox for \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\"" Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.266 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0", GenerateName:"calico-apiserver-548f644bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f1c906d-fd33-4115-b0b0-35d63313ac89", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548f644bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47", Pod:"calico-apiserver-548f644bc4-kt62w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e0618bd885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.266 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:36.303781 
env[1314]: 2025-07-15 11:38:36.266 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" iface="eth0" netns="" Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.266 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.266 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.290 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.290 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.290 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.296 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.296 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.297 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.303781 env[1314]: 2025-07-15 11:38:36.299 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:36.304237 env[1314]: time="2025-07-15T11:38:36.303809639Z" level=info msg="TearDown network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\" successfully" Jul 15 11:38:36.304237 env[1314]: time="2025-07-15T11:38:36.303839635Z" level=info msg="StopPodSandbox for \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\" returns successfully" Jul 15 11:38:36.304305 env[1314]: time="2025-07-15T11:38:36.304274650Z" level=info msg="RemovePodSandbox for \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\"" Jul 15 11:38:36.304330 env[1314]: time="2025-07-15T11:38:36.304297413Z" level=info msg="Forcibly stopping sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\"" Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.334 [WARNING][5316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0", GenerateName:"calico-apiserver-548f644bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f1c906d-fd33-4115-b0b0-35d63313ac89", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548f644bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b1b4e82cd1a72064b134406d31de2408a7d62ce383062a403dd6cf944cd1f47", Pod:"calico-apiserver-548f644bc4-kt62w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e0618bd885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.334 [INFO][5316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.334 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" iface="eth0" netns="" Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.334 [INFO][5316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.335 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.354 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.355 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.355 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.359 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.359 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" HandleID="k8s-pod-network.8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Workload="localhost-k8s-calico--apiserver--548f644bc4--kt62w-eth0" Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.360 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.363554 env[1314]: 2025-07-15 11:38:36.361 [INFO][5316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94" Jul 15 11:38:36.364140 env[1314]: time="2025-07-15T11:38:36.363581526Z" level=info msg="TearDown network for sandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\" successfully" Jul 15 11:38:36.367411 env[1314]: time="2025-07-15T11:38:36.367366457Z" level=info msg="RemovePodSandbox \"8bd78fa3fafc96c431607f1515b41407ef6a88f59bca46a5acec19a045215b94\" returns successfully" Jul 15 11:38:36.367903 env[1314]: time="2025-07-15T11:38:36.367855865Z" level=info msg="StopPodSandbox for \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\"" Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.395 [WARNING][5341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0", GenerateName:"calico-kube-controllers-54b4db4784-", Namespace:"calico-system", SelfLink:"", UID:"025841db-a9f5-430b-a1a5-f023b95f1b83", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b4db4784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308", Pod:"calico-kube-controllers-54b4db4784-n8kns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali285f2cee6f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.396 [INFO][5341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" 
Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.396 [INFO][5341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" iface="eth0" netns="" Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.396 [INFO][5341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.396 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.413 [INFO][5349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.413 [INFO][5349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.413 [INFO][5349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.418 [WARNING][5349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.418 [INFO][5349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.419 [INFO][5349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.423125 env[1314]: 2025-07-15 11:38:36.421 [INFO][5341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:36.423125 env[1314]: time="2025-07-15T11:38:36.423086141Z" level=info msg="TearDown network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\" successfully" Jul 15 11:38:36.423125 env[1314]: time="2025-07-15T11:38:36.423119544Z" level=info msg="StopPodSandbox for \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\" returns successfully" Jul 15 11:38:36.424587 env[1314]: time="2025-07-15T11:38:36.423609423Z" level=info msg="RemovePodSandbox for \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\"" Jul 15 11:38:36.424587 env[1314]: time="2025-07-15T11:38:36.423642896Z" level=info msg="Forcibly stopping sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\"" Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.451 [WARNING][5366] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0", GenerateName:"calico-kube-controllers-54b4db4784-", Namespace:"calico-system", SelfLink:"", UID:"025841db-a9f5-430b-a1a5-f023b95f1b83", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b4db4784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aba8351403e5083019c9aaf4d3203efa40927a479438bc5ffa2b7ce2e2b0308", Pod:"calico-kube-controllers-54b4db4784-n8kns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali285f2cee6f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.451 [INFO][5366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.451 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" iface="eth0" netns="" Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.451 [INFO][5366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.451 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.468 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.468 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.468 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.473 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.473 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" HandleID="k8s-pod-network.24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Workload="localhost-k8s-calico--kube--controllers--54b4db4784--n8kns-eth0" Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.475 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.478781 env[1314]: 2025-07-15 11:38:36.477 [INFO][5366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5" Jul 15 11:38:36.479394 env[1314]: time="2025-07-15T11:38:36.478805034Z" level=info msg="TearDown network for sandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\" successfully" Jul 15 11:38:36.484002 env[1314]: time="2025-07-15T11:38:36.483971338Z" level=info msg="RemovePodSandbox \"24fb5de74b45cd39b601b7850333cc8603bab0f21d694712944924145d3c3fc5\" returns successfully" Jul 15 11:38:36.484618 env[1314]: time="2025-07-15T11:38:36.484573047Z" level=info msg="StopPodSandbox for \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\"" Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.540 [WARNING][5394] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--swgg9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9513186e-84fa-49d1-893d-fcd495764a33", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c", Pod:"csi-node-driver-swgg9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f4f4da566a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.540 [INFO][5394] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 
11:38:36.581322 env[1314]: 2025-07-15 11:38:36.540 [INFO][5394] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" iface="eth0" netns="" Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.541 [INFO][5394] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.541 [INFO][5394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.570 [INFO][5402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.571 [INFO][5402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.571 [INFO][5402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.576 [WARNING][5402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.576 [INFO][5402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.578 [INFO][5402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.581322 env[1314]: 2025-07-15 11:38:36.579 [INFO][5394] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:36.582089 env[1314]: time="2025-07-15T11:38:36.581339435Z" level=info msg="TearDown network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\" successfully" Jul 15 11:38:36.582089 env[1314]: time="2025-07-15T11:38:36.581371846Z" level=info msg="StopPodSandbox for \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\" returns successfully" Jul 15 11:38:36.582089 env[1314]: time="2025-07-15T11:38:36.581885881Z" level=info msg="RemovePodSandbox for \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\"" Jul 15 11:38:36.582089 env[1314]: time="2025-07-15T11:38:36.581926437Z" level=info msg="Forcibly stopping sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\"" Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.611 [WARNING][5420] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--swgg9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9513186e-84fa-49d1-893d-fcd495764a33", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9496c044902e0ccda42fb488a17c4cc58c5f1000ca712cdca2e30589f3226b4c", Pod:"csi-node-driver-swgg9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f4f4da566a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.611 [INFO][5420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.611 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" iface="eth0" netns="" Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.611 [INFO][5420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.611 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.628 [INFO][5428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.629 [INFO][5428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.629 [INFO][5428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.637 [WARNING][5428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.637 [INFO][5428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" HandleID="k8s-pod-network.d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Workload="localhost-k8s-csi--node--driver--swgg9-eth0" Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.638 [INFO][5428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.642271 env[1314]: 2025-07-15 11:38:36.640 [INFO][5420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f" Jul 15 11:38:36.642872 env[1314]: time="2025-07-15T11:38:36.642291377Z" level=info msg="TearDown network for sandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\" successfully" Jul 15 11:38:36.650433 env[1314]: time="2025-07-15T11:38:36.650402576Z" level=info msg="RemovePodSandbox \"d7dce04247bf8bcad843a5e8d28d807766313a988825f2a66c108ad40c9d389f\" returns successfully" Jul 15 11:38:36.650944 env[1314]: time="2025-07-15T11:38:36.650900200Z" level=info msg="StopPodSandbox for \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\"" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.677 [WARNING][5446] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" WorkloadEndpoint="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.677 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.677 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" iface="eth0" netns="" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.677 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.677 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.693 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.693 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.693 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.699 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.699 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.700 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.703935 env[1314]: 2025-07-15 11:38:36.702 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:36.703935 env[1314]: time="2025-07-15T11:38:36.703895221Z" level=info msg="TearDown network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\" successfully" Jul 15 11:38:36.703935 env[1314]: time="2025-07-15T11:38:36.703925528Z" level=info msg="StopPodSandbox for \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\" returns successfully" Jul 15 11:38:36.705342 env[1314]: time="2025-07-15T11:38:36.705308724Z" level=info msg="RemovePodSandbox for \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\"" Jul 15 11:38:36.705420 env[1314]: time="2025-07-15T11:38:36.705348098Z" level=info msg="Forcibly stopping sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\"" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.732 [WARNING][5473] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" WorkloadEndpoint="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.732 [INFO][5473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.732 [INFO][5473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" iface="eth0" netns="" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.732 [INFO][5473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.732 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.748 [INFO][5481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.748 [INFO][5481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.749 [INFO][5481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.757 [WARNING][5481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.757 [INFO][5481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" HandleID="k8s-pod-network.ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Workload="localhost-k8s-whisker--848d5c4469--g74sb-eth0" Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.759 [INFO][5481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.763186 env[1314]: 2025-07-15 11:38:36.761 [INFO][5473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45" Jul 15 11:38:36.763600 env[1314]: time="2025-07-15T11:38:36.763201956Z" level=info msg="TearDown network for sandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\" successfully" Jul 15 11:38:36.766543 env[1314]: time="2025-07-15T11:38:36.766497739Z" level=info msg="RemovePodSandbox \"ce5d530c0da487af647e0499fdc85e6bab65f4debd329d13a13bb5b1b4703d45\" returns successfully" Jul 15 11:38:36.767057 env[1314]: time="2025-07-15T11:38:36.767023175Z" level=info msg="StopPodSandbox for \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\"" Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.796 [WARNING][5499] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a7f801d7-4928-4dc4-8fb8-d3b03f14ceff", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48", Pod:"coredns-7c65d6cfc9-mgvww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2e2da62cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.797 [INFO][5499] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.797 [INFO][5499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" iface="eth0" netns="" Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.797 [INFO][5499] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.797 [INFO][5499] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.814 [INFO][5507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.814 [INFO][5507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.814 [INFO][5507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.819 [WARNING][5507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.819 [INFO][5507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.821 [INFO][5507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.824654 env[1314]: 2025-07-15 11:38:36.823 [INFO][5499] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:36.825086 env[1314]: time="2025-07-15T11:38:36.824680525Z" level=info msg="TearDown network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\" successfully" Jul 15 11:38:36.825086 env[1314]: time="2025-07-15T11:38:36.824713838Z" level=info msg="StopPodSandbox for \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\" returns successfully" Jul 15 11:38:36.825239 env[1314]: time="2025-07-15T11:38:36.825208967Z" level=info msg="RemovePodSandbox for \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\"" Jul 15 11:38:36.825315 env[1314]: time="2025-07-15T11:38:36.825259241Z" level=info msg="Forcibly stopping sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\"" Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.851 [WARNING][5524] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a7f801d7-4928-4dc4-8fb8-d3b03f14ceff", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 11, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4622b2f270d9a08d032d518428cfb1dd535335f9c3320d278ec1afa4c6515e48", Pod:"coredns-7c65d6cfc9-mgvww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2e2da62cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.852 [INFO][5524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.852 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" iface="eth0" netns="" Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.852 [INFO][5524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.852 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.875 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.875 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.875 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.880 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.880 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" HandleID="k8s-pod-network.4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Workload="localhost-k8s-coredns--7c65d6cfc9--mgvww-eth0" Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.881 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 11:38:36.884740 env[1314]: 2025-07-15 11:38:36.883 [INFO][5524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78" Jul 15 11:38:36.885194 env[1314]: time="2025-07-15T11:38:36.884768105Z" level=info msg="TearDown network for sandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\" successfully" Jul 15 11:38:36.888493 env[1314]: time="2025-07-15T11:38:36.888463950Z" level=info msg="RemovePodSandbox \"4bb92833f8151ca9362979f39b4eef2f14ea328b9168f7d9c6ec7e98634c7d78\" returns successfully" Jul 15 11:38:37.814156 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:35456.service. Jul 15 11:38:37.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.133:22-10.0.0.1:35456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:37.815531 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 15 11:38:37.815607 kernel: audit: type=1130 audit(1752579517.813:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.133:22-10.0.0.1:35456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:38:37.851000 audit[5540]: USER_ACCT pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:37.852425 sshd[5540]: Accepted publickey for core from 10.0.0.1 port 35456 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:37.855741 sshd[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:37.854000 audit[5540]: CRED_ACQ pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:37.859533 kernel: audit: type=1101 audit(1752579517.851:478): pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:37.859589 kernel: audit: type=1103 audit(1752579517.854:479): pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:37.859624 kernel: audit: type=1006 audit(1752579517.854:480): pid=5540 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 15 11:38:37.859277 systemd-logind[1296]: New session 15 of user core. Jul 15 11:38:37.860074 systemd[1]: Started session-15.scope. 
Jul 15 11:38:37.854000 audit[5540]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd01a46260 a2=3 a3=0 items=0 ppid=1 pid=5540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:37.865272 kernel: audit: type=1300 audit(1752579517.854:480): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd01a46260 a2=3 a3=0 items=0 ppid=1 pid=5540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:37.854000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:37.866575 kernel: audit: type=1327 audit(1752579517.854:480): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:37.866625 kernel: audit: type=1105 audit(1752579517.862:481): pid=5540 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:37.862000 audit[5540]: USER_START pid=5540 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:37.863000 audit[5543]: CRED_ACQ pid=5543 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:37.886786 kernel: audit: type=1103 audit(1752579517.863:482): pid=5543 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:38.076813 sshd[5540]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:38.076000 audit[5540]: USER_END pid=5540 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:38.078841 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:35456.service: Deactivated successfully. Jul 15 11:38:38.079734 systemd-logind[1296]: Session 15 logged out. Waiting for processes to exit. Jul 15 11:38:38.079852 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 11:38:38.080839 systemd-logind[1296]: Removed session 15. 
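Each accepted SSH login above leaves a fixed audit trail (USER_ACCT, CRED_ACQ, SYSCALL/PROCTITLE, USER_START, then USER_END, CRED_DISP and SERVICE_STOP on close). The PROCTITLE field is the audited process's command line, hex-encoded with NUL-separated arguments; the value repeated in these records, 737368643A20636F7265205B707269765D, decodes to "sshd: core [priv]". A small decoder, as a sketch using only the hex values taken verbatim from this log:

```python
#!/usr/bin/env python3
"""Decode audit PROCTITLE hex strings such as the ones in the records above."""


def decode_proctitle(hex_value: str) -> str:
    """Convert the hex-encoded command line to text, turning the NUL argument
    separators back into spaces."""
    raw = bytes.fromhex(hex_value)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")


if __name__ == "__main__":
    # Value copied verbatim from the sshd audit records above.
    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]

    # Value copied verbatim from the iptables-restore audit records further below.
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters
```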
Jul 15 11:38:38.076000 audit[5540]: CRED_DISP pid=5540 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:38.085262 kernel: audit: type=1106 audit(1752579518.076:483): pid=5540 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:38.085306 kernel: audit: type=1104 audit(1752579518.076:484): pid=5540 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:38.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.133:22-10.0.0.1:35456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:43.079790 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:58050.service. Jul 15 11:38:43.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.133:22-10.0.0.1:58050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:43.080963 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 15 11:38:43.081001 kernel: audit: type=1130 audit(1752579523.078:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.133:22-10.0.0.1:58050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:43.114000 audit[5578]: USER_ACCT pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.115685 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 58050 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:43.117627 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:43.119264 kernel: audit: type=1101 audit(1752579523.114:487): pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.116000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.121126 systemd-logind[1296]: New session 16 of user core. Jul 15 11:38:43.121796 systemd[1]: Started session-16.scope. 
Jul 15 11:38:43.125078 kernel: audit: type=1103 audit(1752579523.116:488): pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.125125 kernel: audit: type=1006 audit(1752579523.116:489): pid=5578 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 15 11:38:43.125160 kernel: audit: type=1300 audit(1752579523.116:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb4620110 a2=3 a3=0 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:43.116000 audit[5578]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb4620110 a2=3 a3=0 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:43.128830 kernel: audit: type=1327 audit(1752579523.116:489): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:43.116000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:43.130070 kernel: audit: type=1105 audit(1752579523.124:490): pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.124000 audit[5578]: USER_START pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.125000 audit[5581]: CRED_ACQ pid=5581 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.138449 kernel: audit: type=1103 audit(1752579523.125:491): pid=5581 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.253955 sshd[5578]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:43.253000 audit[5578]: USER_END pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.256373 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:58050.service: Deactivated successfully. Jul 15 11:38:43.257223 systemd-logind[1296]: Session 16 logged out. Waiting for processes to exit. Jul 15 11:38:43.257309 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 11:38:43.258134 systemd-logind[1296]: Removed session 16. 
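Each accepted publickey login from 10.0.0.1 gets its own session-N.scope via systemd-logind and pam_systemd, and in this stretch of the log the sessions open and close within seconds (session 16, for example, runs from roughly 11:38:43.12 to 11:38:43.25), which is consistent with short non-interactive commands rather than an interactive shell. A hedged sketch for estimating session lifetimes from the "Started session-N.scope" / "Removed session N" messages; the file path is hypothetical, and the year is taken from the dated containerd records elsewhere in this log:

```python
#!/usr/bin/env python3
"""Estimate SSH session lifetimes from systemd messages like the ones above."""
import re
from datetime import datetime

LOG_PATH = "node.log"  # hypothetical path to the saved journal text

stamp = r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)"
start_re = re.compile(stamp + r" systemd\[1\]: Started session-(?P<ses>\d+)\.scope\.")
end_re = re.compile(stamp + r" systemd-logind\[\d+\]: Removed session (?P<ses>\d+)\.")


def parse(ts: str) -> datetime:
    # The journal prefix omits the year; 2025 is taken from the dated records in this log.
    return datetime.strptime("2025 " + ts, "%Y %b %d %H:%M:%S.%f")


starts, ends = {}, {}
with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    text = fh.read()

for m in start_re.finditer(text):
    starts[m.group("ses")] = parse(m.group("ts"))
for m in end_re.finditer(text):
    ends[m.group("ses")] = parse(m.group("ts"))

for ses in sorted(starts, key=int):
    if ses in ends:
        print(f"session {ses}: {(ends[ses] - starts[ses]).total_seconds():.1f}s")
```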
Jul 15 11:38:43.259282 kernel: audit: type=1106 audit(1752579523.253:492): pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.253000 audit[5578]: CRED_DISP pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.133:22-10.0.0.1:58050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:43.263274 kernel: audit: type=1104 audit(1752579523.253:493): pid=5578 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:43.277761 kubelet[2105]: I0715 11:38:43.277719 2105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:38:43.305000 audit[5593]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule pid=5593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:43.305000 audit[5593]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc56ea2580 a2=0 a3=7ffc56ea256c items=0 ppid=2252 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:43.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:43.310000 audit[5593]: NETFILTER_CFG table=nat:129 family=2 entries=36 op=nft_register_chain pid=5593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:43.310000 audit[5593]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffc56ea2580 a2=0 a3=7ffc56ea256c items=0 ppid=2252 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:43.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:45.402711 systemd[1]: run-containerd-runc-k8s.io-e1bb48846b13258b703a30c865fdab3841038e8a0ac55904fc31ee8040839df7-runc.oqfEaR.mount: Deactivated successfully. Jul 15 11:38:48.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.133:22-10.0.0.1:34118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:48.256742 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:34118.service. Jul 15 11:38:48.257754 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 15 11:38:48.257861 kernel: audit: type=1130 audit(1752579528.255:497): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.133:22-10.0.0.1:34118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 15 11:38:48.293000 audit[5616]: USER_ACCT pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.294512 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 34118 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:48.297971 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:48.296000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.302236 kernel: audit: type=1101 audit(1752579528.293:498): pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.302301 kernel: audit: type=1103 audit(1752579528.296:499): pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.302321 kernel: audit: type=1006 audit(1752579528.296:500): pid=5616 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 15 11:38:48.302493 systemd-logind[1296]: New session 17 of user core. Jul 15 11:38:48.303299 systemd[1]: Started session-17.scope. 
Jul 15 11:38:48.304410 kernel: audit: type=1300 audit(1752579528.296:500): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1a1eadb0 a2=3 a3=0 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:48.296000 audit[5616]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1a1eadb0 a2=3 a3=0 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:48.296000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:48.309299 kernel: audit: type=1327 audit(1752579528.296:500): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:48.309355 kernel: audit: type=1105 audit(1752579528.307:501): pid=5616 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.307000 audit[5616]: USER_START pid=5616 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.313349 kernel: audit: type=1103 audit(1752579528.308:502): pid=5619 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.308000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.441996 sshd[5616]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:48.441000 audit[5616]: USER_END pid=5616 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.441000 audit[5616]: CRED_DISP pid=5616 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.444332 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:34122.service. Jul 15 11:38:48.444960 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:34118.service: Deactivated successfully. Jul 15 11:38:48.445784 systemd[1]: session-17.scope: Deactivated successfully. 
Jul 15 11:38:48.450822 kernel: audit: type=1106 audit(1752579528.441:503): pid=5616 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.450934 kernel: audit: type=1104 audit(1752579528.441:504): pid=5616 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.133:22-10.0.0.1:34122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:48.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.133:22-10.0.0.1:34118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:48.451330 systemd-logind[1296]: Session 17 logged out. Waiting for processes to exit. Jul 15 11:38:48.452097 systemd-logind[1296]: Removed session 17. Jul 15 11:38:48.477000 audit[5628]: USER_ACCT pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.478518 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 34122 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:48.477000 audit[5628]: CRED_ACQ pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.477000 audit[5628]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9f176590 a2=3 a3=0 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:48.477000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:48.479427 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:48.483368 systemd-logind[1296]: New session 18 of user core. Jul 15 11:38:48.483710 systemd[1]: Started session-18.scope. 
Jul 15 11:38:48.487000 audit[5628]: USER_START pid=5628 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.488000 audit[5633]: CRED_ACQ pid=5633 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.669437 sshd[5628]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:48.669000 audit[5628]: USER_END pid=5628 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.669000 audit[5628]: CRED_DISP pid=5628 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.671806 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:34128.service. Jul 15 11:38:48.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.133:22-10.0.0.1:34128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:48.672309 systemd-logind[1296]: Session 18 logged out. Waiting for processes to exit. Jul 15 11:38:48.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.133:22-10.0.0.1:34122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:48.672917 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:34122.service: Deactivated successfully. Jul 15 11:38:48.673807 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 11:38:48.674234 systemd-logind[1296]: Removed session 18. Jul 15 11:38:48.706000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.708328 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 34128 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:48.707000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.707000 audit[5640]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff3fd4f40 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:48.707000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:48.709186 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:48.712440 systemd-logind[1296]: New session 19 of user core. 
Jul 15 11:38:48.713155 systemd[1]: Started session-19.scope. Jul 15 11:38:48.716000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:48.717000 audit[5645]: CRED_ACQ pid=5645 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:49.400000 audit[5677]: NETFILTER_CFG table=filter:130 family=2 entries=9 op=nft_register_rule pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:49.400000 audit[5677]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe0f382060 a2=0 a3=7ffe0f38204c items=0 ppid=2252 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:49.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:49.405000 audit[5677]: NETFILTER_CFG table=nat:131 family=2 entries=31 op=nft_register_chain pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:49.405000 audit[5677]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffe0f382060 a2=0 a3=7ffe0f38204c items=0 ppid=2252 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:49.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:50.226000 audit[5679]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:50.226000 audit[5679]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff3b8ab240 a2=0 a3=7fff3b8ab22c items=0 ppid=2252 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:50.226000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:50.233000 audit[5679]: NETFILTER_CFG table=nat:133 family=2 entries=26 op=nft_register_rule pid=5679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:50.233000 audit[5679]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fff3b8ab240 a2=0 a3=0 items=0 ppid=2252 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:50.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:50.238169 sshd[5640]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:50.239845 systemd[1]: Started 
sshd@19-10.0.0.133:22-10.0.0.1:34142.service. Jul 15 11:38:50.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.133:22-10.0.0.1:34142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:50.241000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.241000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.243200 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:34128.service: Deactivated successfully. Jul 15 11:38:50.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.133:22-10.0.0.1:34128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:50.244364 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 11:38:50.244820 systemd-logind[1296]: Session 19 logged out. Waiting for processes to exit. Jul 15 11:38:50.245671 systemd-logind[1296]: Removed session 19. Jul 15 11:38:50.278000 audit[5680]: USER_ACCT pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.278817 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 34142 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:50.279000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.279000 audit[5680]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeabb93f10 a2=3 a3=0 items=0 ppid=1 pid=5680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:50.279000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:50.279723 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:50.283405 systemd-logind[1296]: New session 20 of user core. Jul 15 11:38:50.284100 systemd[1]: Started session-20.scope. 
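The NETFILTER_CFG / SYSCALL pairs in this part of the log (comm="iptables-restor", exe="/usr/sbin/xtables-nft-multi", and a PROCTITLE that decodes to "iptables-restore -w 5 -W 100000 --noflush --counters") record periodic iptables-restore runs rewriting the filter and nat tables, most likely kube-proxy's sync loop given the constant parent pid 2252, though the log alone does not name the parent process. Each record reports the table, a rule-set generation counter, the number of entries registered, and the nftables operation. A minimal tally over a saved journal dump, under the same hypothetical-file assumption as the earlier sketches:

```python
#!/usr/bin/env python3
"""Tally NETFILTER_CFG audit records like the ones above from a journal text dump."""
import re
from collections import Counter

LOG_PATH = "node.log"  # hypothetical path to the saved journal text

# Matches e.g.: NETFILTER_CFG table=nat:135 family=2 entries=26 op=nft_register_rule
rec_re = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):(?P<gen>\d+) family=\d+ "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)")

entries_per_table = Counter()
ops_seen = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for match in rec_re.finditer(fh.read()):
        entries_per_table[match.group("table")] += int(match.group("entries"))
        ops_seen[match.group("op")] += 1

print("entries registered per table:", dict(entries_per_table))
print("operations seen:", dict(ops_seen))
```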
Jul 15 11:38:50.287000 audit[5680]: USER_START pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.289000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.702278 sshd[5680]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:50.704789 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:34146.service. Jul 15 11:38:50.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.133:22-10.0.0.1:34146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:50.705000 audit[5680]: USER_END pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.706000 audit[5680]: CRED_DISP pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.708104 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:34142.service: Deactivated successfully. Jul 15 11:38:50.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.133:22-10.0.0.1:34142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:50.708824 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 11:38:50.710193 systemd-logind[1296]: Session 20 logged out. Waiting for processes to exit. Jul 15 11:38:50.711026 systemd-logind[1296]: Removed session 20. Jul 15 11:38:50.738000 audit[5692]: USER_ACCT pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.739089 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 34146 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:50.739000 audit[5692]: CRED_ACQ pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.739000 audit[5692]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd00042890 a2=3 a3=0 items=0 ppid=1 pid=5692 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:50.739000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:50.740087 sshd[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:50.743318 systemd-logind[1296]: New session 21 of user core. 
Jul 15 11:38:50.744035 systemd[1]: Started session-21.scope. Jul 15 11:38:50.748000 audit[5692]: USER_START pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.749000 audit[5697]: CRED_ACQ pid=5697 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.845433 sshd[5692]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:50.846000 audit[5692]: USER_END pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.846000 audit[5692]: CRED_DISP pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:50.847861 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:34146.service: Deactivated successfully. Jul 15 11:38:50.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.133:22-10.0.0.1:34146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:50.848881 systemd-logind[1296]: Session 21 logged out. Waiting for processes to exit. Jul 15 11:38:50.848937 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 11:38:50.849767 systemd-logind[1296]: Removed session 21. Jul 15 11:38:51.246000 audit[5709]: NETFILTER_CFG table=filter:134 family=2 entries=32 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:51.246000 audit[5709]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffcfadfc550 a2=0 a3=7ffcfadfc53c items=0 ppid=2252 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:51.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:51.251000 audit[5709]: NETFILTER_CFG table=nat:135 family=2 entries=26 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:51.251000 audit[5709]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffcfadfc550 a2=0 a3=0 items=0 ppid=2252 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:51.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:55.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.133:22-10.0.0.1:34160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 15 11:38:55.848881 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:34160.service. Jul 15 11:38:55.857786 kernel: kauditd_printk_skb: 63 callbacks suppressed Jul 15 11:38:55.857837 kernel: audit: type=1130 audit(1752579535.847:548): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.133:22-10.0.0.1:34160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:38:55.892000 audit[5716]: USER_ACCT pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:55.894018 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 34160 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:38:55.895678 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:38:55.894000 audit[5716]: CRED_ACQ pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:55.899229 systemd-logind[1296]: New session 22 of user core. Jul 15 11:38:55.899936 systemd[1]: Started session-22.scope. Jul 15 11:38:55.901076 kernel: audit: type=1101 audit(1752579535.892:549): pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:55.901195 kernel: audit: type=1103 audit(1752579535.894:550): pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:55.894000 audit[5716]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdebe8f10 a2=3 a3=0 items=0 ppid=1 pid=5716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:55.907456 kernel: audit: type=1006 audit(1752579535.894:551): pid=5716 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 15 11:38:55.907517 kernel: audit: type=1300 audit(1752579535.894:551): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdebe8f10 a2=3 a3=0 items=0 ppid=1 pid=5716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:55.907538 kernel: audit: type=1327 audit(1752579535.894:551): proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:55.894000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:38:55.902000 audit[5716]: USER_START pid=5716 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:55.912764 kernel: audit: type=1105 audit(1752579535.902:552): pid=5716 uid=0 auid=500 ses=22 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:55.912805 kernel: audit: type=1103 audit(1752579535.904:553): pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:55.904000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:56.034682 sshd[5716]: pam_unix(sshd:session): session closed for user core Jul 15 11:38:56.034000 audit[5716]: USER_END pid=5716 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:56.037075 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:34160.service: Deactivated successfully. Jul 15 11:38:56.037821 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 11:38:56.038636 systemd-logind[1296]: Session 22 logged out. Waiting for processes to exit. Jul 15 11:38:56.039344 systemd-logind[1296]: Removed session 22. Jul 15 11:38:56.034000 audit[5716]: CRED_DISP pid=5716 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:56.043090 kernel: audit: type=1106 audit(1752579536.034:554): pid=5716 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:56.043140 kernel: audit: type=1104 audit(1752579536.034:555): pid=5716 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:38:56.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.133:22-10.0.0.1:34160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:38:56.402000 audit[5731]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:56.402000 audit[5731]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffed4640ac0 a2=0 a3=7ffed4640aac items=0 ppid=2252 pid=5731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:56.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:38:56.415000 audit[5731]: NETFILTER_CFG table=nat:137 family=2 entries=110 op=nft_register_chain pid=5731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 15 11:38:56.415000 audit[5731]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffed4640ac0 a2=0 a3=7ffed4640aac items=0 ppid=2252 pid=5731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:38:56.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 15 11:39:01.038100 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:37122.service. Jul 15 11:39:01.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.133:22-10.0.0.1:37122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:39:01.039371 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 15 11:39:01.039421 kernel: audit: type=1130 audit(1752579541.037:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.133:22-10.0.0.1:37122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:39:01.071000 audit[5755]: USER_ACCT pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.072882 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 37122 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:39:01.075000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.077086 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:39:01.080663 kernel: audit: type=1101 audit(1752579541.071:560): pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.080714 kernel: audit: type=1103 audit(1752579541.075:561): pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.080733 kernel: audit: type=1006 audit(1752579541.075:562): pid=5755 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 15 11:39:01.080202 systemd-logind[1296]: New session 23 of user core. Jul 15 11:39:01.080873 systemd[1]: Started session-23.scope. 
Jul 15 11:39:01.075000 audit[5755]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcd2ba4f0 a2=3 a3=0 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:39:01.087720 kernel: audit: type=1300 audit(1752579541.075:562): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcd2ba4f0 a2=3 a3=0 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:39:01.087761 kernel: audit: type=1327 audit(1752579541.075:562): proctitle=737368643A20636F7265205B707269765D Jul 15 11:39:01.075000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:39:01.084000 audit[5755]: USER_START pid=5755 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.093052 kernel: audit: type=1105 audit(1752579541.084:563): pid=5755 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.093096 kernel: audit: type=1103 audit(1752579541.085:564): pid=5758 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.085000 audit[5758]: CRED_ACQ pid=5758 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.193386 sshd[5755]: pam_unix(sshd:session): session closed for user core Jul 15 11:39:01.193000 audit[5755]: USER_END pid=5755 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.195943 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:37122.service: Deactivated successfully. Jul 15 11:39:01.196879 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 11:39:01.193000 audit[5755]: CRED_DISP pid=5755 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.200924 systemd-logind[1296]: Session 23 logged out. Waiting for processes to exit. Jul 15 11:39:01.201677 systemd-logind[1296]: Removed session 23. 
Jul 15 11:39:01.202715 kernel: audit: type=1106 audit(1752579541.193:565): pid=5755 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.202768 kernel: audit: type=1104 audit(1752579541.193:566): pid=5755 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:01.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.133:22-10.0.0.1:37122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:39:01.567710 kubelet[2105]: E0715 11:39:01.567673 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:39:04.570099 kubelet[2105]: E0715 11:39:04.570023 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:39:06.196778 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:37132.service. Jul 15 11:39:06.199973 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 15 11:39:06.200031 kernel: audit: type=1130 audit(1752579546.195:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.133:22-10.0.0.1:37132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:39:06.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.133:22-10.0.0.1:37132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:39:06.232000 audit[5791]: USER_ACCT pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.233522 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 37132 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:39:06.235438 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:39:06.233000 audit[5791]: CRED_ACQ pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.238750 systemd-logind[1296]: New session 24 of user core. Jul 15 11:39:06.239453 systemd[1]: Started session-24.scope. 
Jul 15 11:39:06.242265 kernel: audit: type=1101 audit(1752579546.232:569): pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.242326 kernel: audit: type=1103 audit(1752579546.233:570): pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.242343 kernel: audit: type=1006 audit(1752579546.233:571): pid=5791 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 15 11:39:06.243915 kernel: audit: type=1300 audit(1752579546.233:571): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4226ffc0 a2=3 a3=0 items=0 ppid=1 pid=5791 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:39:06.233000 audit[5791]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4226ffc0 a2=3 a3=0 items=0 ppid=1 pid=5791 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:39:06.248011 kernel: audit: type=1327 audit(1752579546.233:571): proctitle=737368643A20636F7265205B707269765D Jul 15 11:39:06.233000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:39:06.249361 kernel: audit: type=1105 audit(1752579546.242:572): pid=5791 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.242000 audit[5791]: USER_START pid=5791 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.253700 kernel: audit: type=1103 audit(1752579546.243:573): pid=5794 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.243000 audit[5794]: CRED_ACQ pid=5794 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.340163 sshd[5791]: pam_unix(sshd:session): session closed for user core Jul 15 11:39:06.339000 audit[5791]: USER_END pid=5791 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.342535 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:37132.service: Deactivated successfully. Jul 15 11:39:06.343603 systemd[1]: session-24.scope: Deactivated successfully. 
Jul 15 11:39:06.344207 systemd-logind[1296]: Session 24 logged out. Waiting for processes to exit. Jul 15 11:39:06.344985 systemd-logind[1296]: Removed session 24. Jul 15 11:39:06.339000 audit[5791]: CRED_DISP pid=5791 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.348911 kernel: audit: type=1106 audit(1752579546.339:574): pid=5791 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.349089 kernel: audit: type=1104 audit(1752579546.339:575): pid=5791 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:06.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.133:22-10.0.0.1:37132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:39:08.564890 kubelet[2105]: E0715 11:39:08.564851 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:39:11.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.133:22-10.0.0.1:57980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:39:11.343352 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:57980.service. Jul 15 11:39:11.344508 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 15 11:39:11.344564 kernel: audit: type=1130 audit(1752579551.342:577): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.133:22-10.0.0.1:57980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:39:11.379000 audit[5806]: USER_ACCT pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.380503 sshd[5806]: Accepted publickey for core from 10.0.0.1 port 57980 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:39:11.383215 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:39:11.381000 audit[5806]: CRED_ACQ pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.386763 systemd-logind[1296]: New session 25 of user core. 
Jul 15 11:39:11.387539 kernel: audit: type=1101 audit(1752579551.379:578): pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.387596 kernel: audit: type=1103 audit(1752579551.381:579): pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.387614 kernel: audit: type=1006 audit(1752579551.381:580): pid=5806 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 15 11:39:11.387512 systemd[1]: Started session-25.scope. Jul 15 11:39:11.389774 kernel: audit: type=1300 audit(1752579551.381:580): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2d653790 a2=3 a3=0 items=0 ppid=1 pid=5806 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:39:11.381000 audit[5806]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2d653790 a2=3 a3=0 items=0 ppid=1 pid=5806 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:39:11.381000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 15 11:39:11.394793 kernel: audit: type=1327 audit(1752579551.381:580): proctitle=737368643A20636F7265205B707269765D Jul 15 11:39:11.394817 kernel: audit: type=1105 audit(1752579551.390:581): pid=5806 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.390000 audit[5806]: USER_START pid=5806 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.392000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.402056 kernel: audit: type=1103 audit(1752579551.392:582): pid=5809 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.530924 sshd[5806]: pam_unix(sshd:session): session closed for user core Jul 15 11:39:11.530000 audit[5806]: USER_END pid=5806 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.533326 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:57980.service: Deactivated successfully. 
Jul 15 11:39:11.534229 systemd-logind[1296]: Session 25 logged out. Waiting for processes to exit. Jul 15 11:39:11.534267 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 11:39:11.535220 systemd-logind[1296]: Removed session 25. Jul 15 11:39:11.530000 audit[5806]: CRED_DISP pid=5806 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.539453 kernel: audit: type=1106 audit(1752579551.530:583): pid=5806 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.539523 kernel: audit: type=1104 audit(1752579551.530:584): pid=5806 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 15 11:39:11.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.133:22-10.0.0.1:57980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'