May 13 00:49:27.821379 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025 May 13 00:49:27.821396 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:49:27.821406 kernel: BIOS-provided physical RAM map: May 13 00:49:27.821412 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 00:49:27.821417 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 13 00:49:27.821422 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 13 00:49:27.821429 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 13 00:49:27.821435 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 13 00:49:27.821440 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 13 00:49:27.821446 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 13 00:49:27.821452 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 13 00:49:27.821457 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 13 00:49:27.821462 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 13 00:49:27.821468 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 13 00:49:27.821475 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 13 00:49:27.821481 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 13 00:49:27.821487 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 13 
00:49:27.821493 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:49:27.821498 kernel: NX (Execute Disable) protection: active May 13 00:49:27.821504 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 13 00:49:27.821510 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 13 00:49:27.821516 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 13 00:49:27.821522 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 13 00:49:27.821527 kernel: extended physical RAM map: May 13 00:49:27.821533 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 00:49:27.821539 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 13 00:49:27.821545 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 13 00:49:27.821551 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 13 00:49:27.821557 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 13 00:49:27.821562 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable May 13 00:49:27.821568 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 13 00:49:27.821574 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable May 13 00:49:27.821579 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable May 13 00:49:27.821585 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable May 13 00:49:27.821591 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable May 13 00:49:27.821596 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable May 13 00:49:27.821603 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 13 00:49:27.821609 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data May 13 00:49:27.821616 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 13 00:49:27.821621 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 13 00:49:27.821630 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 13 00:49:27.821636 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 13 00:49:27.821642 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:49:27.821649 kernel: efi: EFI v2.70 by EDK II May 13 00:49:27.821656 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 May 13 00:49:27.821662 kernel: random: crng init done May 13 00:49:27.821668 kernel: SMBIOS 2.8 present. May 13 00:49:27.821675 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 13 00:49:27.821681 kernel: Hypervisor detected: KVM May 13 00:49:27.821687 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 00:49:27.821693 kernel: kvm-clock: cpu 0, msr 1c196001, primary cpu clock May 13 00:49:27.821699 kernel: kvm-clock: using sched offset of 3995665506 cycles May 13 00:49:27.821708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 00:49:27.821715 kernel: tsc: Detected 2794.748 MHz processor May 13 00:49:27.821721 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 00:49:27.821727 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 00:49:27.821734 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 13 00:49:27.821740 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 00:49:27.821747 kernel: Using GB pages for direct mapping May 13 00:49:27.821753 kernel: Secure boot disabled May 13 00:49:27.821760 kernel: ACPI: Early table checksum verification disabled May 13 00:49:27.821768 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 13 00:49:27.821774 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 13 00:49:27.821780 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:49:27.821787 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:49:27.821793 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 13 00:49:27.821800 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:49:27.821806 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:49:27.821813 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:49:27.821819 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:49:27.821827 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 13 00:49:27.821833 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 13 00:49:27.821839 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 13 00:49:27.821846 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 13 00:49:27.821852 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 13 00:49:27.821859 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 13 00:49:27.821865 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 13 00:49:27.821871 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 13 00:49:27.821878 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 13 00:49:27.821886 kernel: No NUMA configuration found May 13 00:49:27.821892 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 13 00:49:27.821898 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 13 
00:49:27.821905 kernel: Zone ranges: May 13 00:49:27.821911 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 00:49:27.821918 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 13 00:49:27.821924 kernel: Normal empty May 13 00:49:27.821930 kernel: Movable zone start for each node May 13 00:49:27.821936 kernel: Early memory node ranges May 13 00:49:27.821955 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 13 00:49:27.821962 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 13 00:49:27.821968 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 13 00:49:27.821974 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 13 00:49:27.821981 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 13 00:49:27.821987 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 13 00:49:27.821993 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 13 00:49:27.821999 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:49:27.822006 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 13 00:49:27.822012 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 13 00:49:27.822020 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:49:27.822026 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 13 00:49:27.822032 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 13 00:49:27.822039 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 13 00:49:27.822045 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 00:49:27.822051 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 00:49:27.822058 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 00:49:27.822064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 00:49:27.822070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 00:49:27.822078 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 00:49:27.822085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 00:49:27.822091 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 00:49:27.822098 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 00:49:27.822104 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 00:49:27.822110 kernel: TSC deadline timer available May 13 00:49:27.822117 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 13 00:49:27.822123 kernel: kvm-guest: KVM setup pv remote TLB flush May 13 00:49:27.822129 kernel: kvm-guest: setup PV sched yield May 13 00:49:27.822137 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 13 00:49:27.822143 kernel: Booting paravirtualized kernel on KVM May 13 00:49:27.822155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 00:49:27.822162 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 13 00:49:27.822169 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 13 00:49:27.822176 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 13 00:49:27.822182 kernel: pcpu-alloc: [0] 0 1 2 3 May 13 00:49:27.822189 kernel: kvm-guest: setup async PF for cpu 0 May 13 00:49:27.822195 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 May 13 00:49:27.822202 kernel: kvm-guest: PV spinlocks enabled May 13 00:49:27.822208 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 00:49:27.822215 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 May 13 00:49:27.822223 kernel: Policy zone: DMA32 May 13 00:49:27.822231 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:49:27.822238 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:49:27.822245 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:49:27.822253 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:49:27.822260 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:49:27.822267 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 169308K reserved, 0K cma-reserved) May 13 00:49:27.822274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:49:27.822280 kernel: ftrace: allocating 34584 entries in 136 pages May 13 00:49:27.822287 kernel: ftrace: allocated 136 pages with 2 groups May 13 00:49:27.822294 kernel: rcu: Hierarchical RCU implementation. May 13 00:49:27.822307 kernel: rcu: RCU event tracing is enabled. May 13 00:49:27.822314 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:49:27.822323 kernel: Rude variant of Tasks RCU enabled. May 13 00:49:27.822330 kernel: Tracing variant of Tasks RCU enabled. May 13 00:49:27.822337 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 00:49:27.822344 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:49:27.822350 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 13 00:49:27.822357 kernel: Console: colour dummy device 80x25 May 13 00:49:27.822364 kernel: printk: console [ttyS0] enabled May 13 00:49:27.822370 kernel: ACPI: Core revision 20210730 May 13 00:49:27.822377 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 13 00:49:27.822385 kernel: APIC: Switch to symmetric I/O mode setup May 13 00:49:27.822393 kernel: x2apic enabled May 13 00:49:27.822399 kernel: Switched APIC routing to physical x2apic. May 13 00:49:27.822406 kernel: kvm-guest: setup PV IPIs May 13 00:49:27.822413 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 00:49:27.822419 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 00:49:27.822426 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 13 00:49:27.822433 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 13 00:49:27.822440 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 13 00:49:27.822448 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 13 00:49:27.822455 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 00:49:27.822461 kernel: Spectre V2 : Mitigation: Retpolines May 13 00:49:27.822468 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 00:49:27.822475 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 13 00:49:27.822482 kernel: RETBleed: Mitigation: untrained return thunk May 13 00:49:27.822488 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 00:49:27.822495 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 13 00:49:27.822502 kernel: 
x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 00:49:27.822510 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 00:49:27.822517 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 00:49:27.822523 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 00:49:27.822530 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 13 00:49:27.822537 kernel: Freeing SMP alternatives memory: 32K May 13 00:49:27.822543 kernel: pid_max: default: 32768 minimum: 301 May 13 00:49:27.822550 kernel: LSM: Security Framework initializing May 13 00:49:27.822557 kernel: SELinux: Initializing. May 13 00:49:27.822563 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:49:27.822571 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:49:27.822578 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 13 00:49:27.822585 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 13 00:49:27.822592 kernel: ... version: 0 May 13 00:49:27.822598 kernel: ... bit width: 48 May 13 00:49:27.822605 kernel: ... generic registers: 6 May 13 00:49:27.822612 kernel: ... value mask: 0000ffffffffffff May 13 00:49:27.822618 kernel: ... max period: 00007fffffffffff May 13 00:49:27.822625 kernel: ... fixed-purpose events: 0 May 13 00:49:27.822633 kernel: ... event mask: 000000000000003f May 13 00:49:27.822639 kernel: signal: max sigframe size: 1776 May 13 00:49:27.822646 kernel: rcu: Hierarchical SRCU implementation. May 13 00:49:27.822653 kernel: smp: Bringing up secondary CPUs ... May 13 00:49:27.822659 kernel: x86: Booting SMP configuration: May 13 00:49:27.822666 kernel: .... 
node #0, CPUs: #1 May 13 00:49:27.822672 kernel: kvm-clock: cpu 1, msr 1c196041, secondary cpu clock May 13 00:49:27.822679 kernel: kvm-guest: setup async PF for cpu 1 May 13 00:49:27.822686 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 May 13 00:49:27.822694 kernel: #2 May 13 00:49:27.822701 kernel: kvm-clock: cpu 2, msr 1c196081, secondary cpu clock May 13 00:49:27.822707 kernel: kvm-guest: setup async PF for cpu 2 May 13 00:49:27.822714 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 May 13 00:49:27.822721 kernel: #3 May 13 00:49:27.822727 kernel: kvm-clock: cpu 3, msr 1c1960c1, secondary cpu clock May 13 00:49:27.822734 kernel: kvm-guest: setup async PF for cpu 3 May 13 00:49:27.822740 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 May 13 00:49:27.822747 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:49:27.822754 kernel: smpboot: Max logical packages: 1 May 13 00:49:27.822762 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 13 00:49:27.822768 kernel: devtmpfs: initialized May 13 00:49:27.822775 kernel: x86/mm: Memory block size: 128MB May 13 00:49:27.822782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 13 00:49:27.822789 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 13 00:49:27.822796 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 13 00:49:27.822802 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 13 00:49:27.822809 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 13 00:49:27.822816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:49:27.822824 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:49:27.822831 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:49:27.822838 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family May 13 00:49:27.822844 kernel: audit: initializing netlink subsys (disabled) May 13 00:49:27.822851 kernel: audit: type=2000 audit(1747097367.350:1): state=initialized audit_enabled=0 res=1 May 13 00:49:27.822858 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:49:27.822864 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:49:27.822871 kernel: cpuidle: using governor menu May 13 00:49:27.822877 kernel: ACPI: bus type PCI registered May 13 00:49:27.822885 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:49:27.822892 kernel: dca service started, version 1.12.1 May 13 00:49:27.822899 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 13 00:49:27.822906 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 13 00:49:27.822912 kernel: PCI: Using configuration type 1 for base access May 13 00:49:27.822919 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 00:49:27.822926 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:49:27.822933 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:49:27.822939 kernel: ACPI: Added _OSI(Module Device) May 13 00:49:27.822957 kernel: ACPI: Added _OSI(Processor Device) May 13 00:49:27.822964 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:49:27.822970 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:49:27.822977 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 00:49:27.822984 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 00:49:27.822990 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 00:49:27.822997 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:49:27.823004 kernel: ACPI: Interpreter enabled May 13 00:49:27.823010 kernel: ACPI: PM: (supports S0 S3 S5) May 13 00:49:27.823018 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:49:27.823025 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:49:27.823032 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 13 00:49:27.823038 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:49:27.823152 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:49:27.823223 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 13 00:49:27.823288 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 13 00:49:27.823306 kernel: PCI host bridge to bus 0000:00 May 13 00:49:27.823385 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:49:27.823448 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 00:49:27.823507 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 00:49:27.823567 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 13 
00:49:27.823624 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:49:27.823684 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 13 00:49:27.823748 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 00:49:27.823830 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 13 00:49:27.823907 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 13 00:49:27.824016 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 13 00:49:27.824088 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 13 00:49:27.824155 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 13 00:49:27.824222 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 13 00:49:27.825172 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:49:27.825286 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:49:27.825374 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 13 00:49:27.825447 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 13 00:49:27.825517 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 13 00:49:27.825596 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 13 00:49:27.825985 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 13 00:49:27.826059 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 13 00:49:27.826125 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 13 00:49:27.826205 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 13 00:49:27.826272 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 13 00:49:27.826347 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 13 00:49:27.826414 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 13 00:49:27.826484 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfffc0000-0xffffffff pref] May 13 00:49:27.826558 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 13 00:49:27.826624 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 13 00:49:27.826697 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 13 00:49:27.826766 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 13 00:49:27.826833 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 13 00:49:27.826909 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 13 00:49:27.827043 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 13 00:49:27.827055 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 00:49:27.827062 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 00:49:27.827070 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:49:27.827077 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 00:49:27.827084 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 13 00:49:27.827091 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 13 00:49:27.827098 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 13 00:49:27.827108 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 13 00:49:27.827115 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 13 00:49:27.827122 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 13 00:49:27.827129 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 13 00:49:27.827136 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 13 00:49:27.827143 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 13 00:49:27.827150 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 13 00:49:27.827158 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 13 00:49:27.827165 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 13 
00:49:27.827174 kernel: iommu: Default domain type: Translated May 13 00:49:27.827181 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:49:27.827309 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 13 00:49:27.827381 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:49:27.827449 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 13 00:49:27.827458 kernel: vgaarb: loaded May 13 00:49:27.827466 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 00:49:27.827473 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 00:49:27.827481 kernel: PTP clock support registered May 13 00:49:27.827490 kernel: Registered efivars operations May 13 00:49:27.827497 kernel: PCI: Using ACPI for IRQ routing May 13 00:49:27.827504 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:49:27.827512 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 13 00:49:27.827519 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 13 00:49:27.827526 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] May 13 00:49:27.827533 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] May 13 00:49:27.827539 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 13 00:49:27.827546 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 13 00:49:27.827555 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 00:49:27.827563 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 00:49:27.827570 kernel: clocksource: Switched to clocksource kvm-clock May 13 00:49:27.827577 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:49:27.827585 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:49:27.827592 kernel: pnp: PnP ACPI init May 13 00:49:27.827669 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 13 00:49:27.827681 kernel: pnp: PnP ACPI: found 6 devices 
May 13 00:49:27.827690 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:49:27.827697 kernel: NET: Registered PF_INET protocol family May 13 00:49:27.827704 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:49:27.827712 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:49:27.827719 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:49:27.827726 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:49:27.827733 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 00:49:27.827741 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:49:27.827749 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:49:27.827756 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:49:27.827764 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:49:27.827771 kernel: NET: Registered PF_XDP protocol family May 13 00:49:27.827853 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 13 00:49:27.827923 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 13 00:49:27.828010 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 00:49:27.828074 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 00:49:27.828139 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 00:49:27.828198 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 13 00:49:27.828258 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 13 00:49:27.828326 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 13 00:49:27.828336 kernel: PCI: CLS 0 bytes, default 64 May 13 00:49:27.828343 
kernel: Initialise system trusted keyrings May 13 00:49:27.828350 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:49:27.828358 kernel: Key type asymmetric registered May 13 00:49:27.828365 kernel: Asymmetric key parser 'x509' registered May 13 00:49:27.828375 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 00:49:27.828382 kernel: io scheduler mq-deadline registered May 13 00:49:27.828401 kernel: io scheduler kyber registered May 13 00:49:27.828409 kernel: io scheduler bfq registered May 13 00:49:27.828417 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:49:27.828425 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 13 00:49:27.828433 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 13 00:49:27.828440 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 13 00:49:27.828448 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:49:27.828457 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:49:27.828464 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 00:49:27.828472 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:49:27.828479 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:49:27.828486 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:49:27.828559 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 00:49:27.828624 kernel: rtc_cmos 00:04: registered as rtc0 May 13 00:49:27.828687 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:49:27 UTC (1747097367) May 13 00:49:27.828750 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 13 00:49:27.828759 kernel: efifb: probing for efifb May 13 00:49:27.828767 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 13 00:49:27.828774 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 13 00:49:27.828781 kernel: efifb: 
scrolling: redraw May 13 00:49:27.828789 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 13 00:49:27.828796 kernel: Console: switching to colour frame buffer device 160x50 May 13 00:49:27.828803 kernel: fb0: EFI VGA frame buffer device May 13 00:49:27.828811 kernel: pstore: Registered efi as persistent store backend May 13 00:49:27.828820 kernel: NET: Registered PF_INET6 protocol family May 13 00:49:27.828828 kernel: Segment Routing with IPv6 May 13 00:49:27.828836 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:49:27.828845 kernel: NET: Registered PF_PACKET protocol family May 13 00:49:27.828852 kernel: Key type dns_resolver registered May 13 00:49:27.828859 kernel: IPI shorthand broadcast: enabled May 13 00:49:27.828868 kernel: sched_clock: Marking stable (443062158, 126787139)->(583203533, -13354236) May 13 00:49:27.828875 kernel: registered taskstats version 1 May 13 00:49:27.828883 kernel: Loading compiled-in X.509 certificates May 13 00:49:27.828890 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 00:49:27.828897 kernel: Key type .fscrypt registered May 13 00:49:27.828904 kernel: Key type fscrypt-provisioning registered May 13 00:49:27.828912 kernel: pstore: Using crash dump compression: deflate May 13 00:49:27.828919 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 00:49:27.828928 kernel: ima: Allocated hash algorithm: sha1 May 13 00:49:27.828935 kernel: ima: No architecture policies found May 13 00:49:27.828960 kernel: clk: Disabling unused clocks May 13 00:49:27.828968 kernel: Freeing unused kernel image (initmem) memory: 47456K May 13 00:49:27.828976 kernel: Write protecting the kernel read-only data: 28672k May 13 00:49:27.828983 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 13 00:49:27.828991 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 13 00:49:27.828998 kernel: Run /init as init process May 13 00:49:27.829005 kernel: with arguments: May 13 00:49:27.829014 kernel: /init May 13 00:49:27.829021 kernel: with environment: May 13 00:49:27.829028 kernel: HOME=/ May 13 00:49:27.829035 kernel: TERM=linux May 13 00:49:27.829042 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:49:27.829052 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:49:27.829062 systemd[1]: Detected virtualization kvm. May 13 00:49:27.829070 systemd[1]: Detected architecture x86-64. May 13 00:49:27.829079 systemd[1]: Running in initrd. May 13 00:49:27.829087 systemd[1]: No hostname configured, using default hostname. May 13 00:49:27.829094 systemd[1]: Hostname set to <linux>. May 13 00:49:27.829102 systemd[1]: Initializing machine ID from VM UUID. May 13 00:49:27.829110 systemd[1]: Queued start job for default target initrd.target. May 13 00:49:27.829118 systemd[1]: Started systemd-ask-password-console.path. May 13 00:49:27.829125 systemd[1]: Reached target cryptsetup.target. May 13 00:49:27.829133 systemd[1]: Reached target paths.target. May 13 00:49:27.829141 systemd[1]: Reached target slices.target. 
May 13 00:49:27.829150 systemd[1]: Reached target swap.target. May 13 00:49:27.829158 systemd[1]: Reached target timers.target. May 13 00:49:27.829166 systemd[1]: Listening on iscsid.socket. May 13 00:49:27.829173 systemd[1]: Listening on iscsiuio.socket. May 13 00:49:27.829181 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:49:27.829189 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:49:27.829197 systemd[1]: Listening on systemd-journald.socket. May 13 00:49:27.829206 systemd[1]: Listening on systemd-networkd.socket. May 13 00:49:27.829214 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:49:27.829222 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:49:27.829230 systemd[1]: Reached target sockets.target. May 13 00:49:27.829238 systemd[1]: Starting kmod-static-nodes.service... May 13 00:49:27.829245 systemd[1]: Finished network-cleanup.service. May 13 00:49:27.829253 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:49:27.829261 systemd[1]: Starting systemd-journald.service... May 13 00:49:27.829268 systemd[1]: Starting systemd-modules-load.service... May 13 00:49:27.829277 systemd[1]: Starting systemd-resolved.service... May 13 00:49:27.829285 systemd[1]: Starting systemd-vconsole-setup.service... May 13 00:49:27.829293 systemd[1]: Finished kmod-static-nodes.service. May 13 00:49:27.829307 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:49:27.829316 kernel: audit: type=1130 audit(1747097367.820:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.829324 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:49:27.829331 systemd[1]: Finished systemd-vconsole-setup.service. 
May 13 00:49:27.829344 systemd-journald[198]: Journal started May 13 00:49:27.829395 systemd-journald[198]: Runtime Journal (/run/log/journal/13ce0b4767434d2493854224bbf3c099) is 6.0M, max 48.4M, 42.4M free. May 13 00:49:27.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.827299 systemd-modules-load[199]: Inserted module 'overlay' May 13 00:49:27.834205 kernel: audit: type=1130 audit(1747097367.828:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.834220 systemd[1]: Started systemd-journald.service. May 13 00:49:27.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.836515 systemd[1]: Starting dracut-cmdline-ask.service... May 13 00:49:27.841011 kernel: audit: type=1130 audit(1747097367.834:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.841096 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 13 00:49:27.845282 kernel: audit: type=1130 audit(1747097367.840:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.853167 systemd[1]: Finished dracut-cmdline-ask.service. May 13 00:49:27.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.854180 systemd[1]: Starting dracut-cmdline.service... May 13 00:49:27.858923 kernel: audit: type=1130 audit(1747097367.852:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.862441 systemd-resolved[200]: Positive Trust Anchors: May 13 00:49:27.862453 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:49:27.870518 kernel: audit: type=1130 audit(1747097367.865:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:27.871380 dracut-cmdline[215]: dracut-dracut-053 May 13 00:49:27.862480 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:49:27.877537 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:49:27.884015 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:49:27.864591 systemd-resolved[200]: Defaulting to hostname 'linux'. May 13 00:49:27.865272 systemd[1]: Started systemd-resolved.service. May 13 00:49:27.866024 systemd[1]: Reached target nss-lookup.target. May 13 00:49:27.888819 systemd-modules-load[199]: Inserted module 'br_netfilter' May 13 00:49:27.889764 kernel: Bridge firewalling registered May 13 00:49:27.904965 kernel: SCSI subsystem initialized May 13 00:49:27.916235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 13 00:49:27.916258 kernel: device-mapper: uevent: version 1.0.3 May 13 00:49:27.917478 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 00:49:27.920104 systemd-modules-load[199]: Inserted module 'dm_multipath' May 13 00:49:27.920752 systemd[1]: Finished systemd-modules-load.service. May 13 00:49:27.925476 kernel: audit: type=1130 audit(1747097367.920:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.922020 systemd[1]: Starting systemd-sysctl.service... May 13 00:49:27.928969 kernel: Loading iSCSI transport class v2.0-870. May 13 00:49:27.931391 systemd[1]: Finished systemd-sysctl.service. May 13 00:49:27.935608 kernel: audit: type=1130 audit(1747097367.930:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:27.948967 kernel: iscsi: registered transport (tcp) May 13 00:49:27.969969 kernel: iscsi: registered transport (qla4xxx) May 13 00:49:27.969986 kernel: QLogic iSCSI HBA Driver May 13 00:49:27.998318 systemd[1]: Finished dracut-cmdline.service. May 13 00:49:28.002556 kernel: audit: type=1130 audit(1747097367.997:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:27.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:28.002583 systemd[1]: Starting dracut-pre-udev.service... May 13 00:49:28.047976 kernel: raid6: avx2x4 gen() 30915 MB/s May 13 00:49:28.064966 kernel: raid6: avx2x4 xor() 8261 MB/s May 13 00:49:28.081964 kernel: raid6: avx2x2 gen() 32685 MB/s May 13 00:49:28.098974 kernel: raid6: avx2x2 xor() 19237 MB/s May 13 00:49:28.115968 kernel: raid6: avx2x1 gen() 26496 MB/s May 13 00:49:28.132966 kernel: raid6: avx2x1 xor() 15331 MB/s May 13 00:49:28.149965 kernel: raid6: sse2x4 gen() 14789 MB/s May 13 00:49:28.166968 kernel: raid6: sse2x4 xor() 7629 MB/s May 13 00:49:28.183969 kernel: raid6: sse2x2 gen() 16393 MB/s May 13 00:49:28.200966 kernel: raid6: sse2x2 xor() 9851 MB/s May 13 00:49:28.217966 kernel: raid6: sse2x1 gen() 12555 MB/s May 13 00:49:28.235365 kernel: raid6: sse2x1 xor() 7765 MB/s May 13 00:49:28.235390 kernel: raid6: using algorithm avx2x2 gen() 32685 MB/s May 13 00:49:28.235400 kernel: raid6: .... xor() 19237 MB/s, rmw enabled May 13 00:49:28.236084 kernel: raid6: using avx2x2 recovery algorithm May 13 00:49:28.247975 kernel: xor: automatically using best checksumming function avx May 13 00:49:28.335970 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 00:49:28.343634 systemd[1]: Finished dracut-pre-udev.service. May 13 00:49:28.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:28.345000 audit: BPF prog-id=7 op=LOAD May 13 00:49:28.345000 audit: BPF prog-id=8 op=LOAD May 13 00:49:28.345553 systemd[1]: Starting systemd-udevd.service... May 13 00:49:28.357354 systemd-udevd[400]: Using default interface naming scheme 'v252'. 
May 13 00:49:28.362035 systemd[1]: Started systemd-udevd.service. May 13 00:49:28.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:28.362999 systemd[1]: Starting dracut-pre-trigger.service... May 13 00:49:28.372962 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation May 13 00:49:28.396756 systemd[1]: Finished dracut-pre-trigger.service. May 13 00:49:28.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:28.397900 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:49:28.428200 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:49:28.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:28.456028 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:49:28.464364 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:49:28.464376 kernel: GPT:9289727 != 19775487 May 13 00:49:28.464390 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:49:28.464399 kernel: GPT:9289727 != 19775487 May 13 00:49:28.464407 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:49:28.464415 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:49:28.464424 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:49:28.468959 kernel: libata version 3.00 loaded. 
May 13 00:49:28.477355 kernel: ahci 0000:00:1f.2: version 3.0 May 13 00:49:28.493013 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 13 00:49:28.493027 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 13 00:49:28.493116 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 13 00:49:28.493383 kernel: AVX2 version of gcm_enc/dec engaged. May 13 00:49:28.493399 kernel: scsi host0: ahci May 13 00:49:28.493541 kernel: AES CTR mode by8 optimization enabled May 13 00:49:28.493553 kernel: scsi host1: ahci May 13 00:49:28.493642 kernel: scsi host2: ahci May 13 00:49:28.493729 kernel: scsi host3: ahci May 13 00:49:28.493823 kernel: scsi host4: ahci May 13 00:49:28.493906 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) May 13 00:49:28.493915 kernel: scsi host5: ahci May 13 00:49:28.494017 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 May 13 00:49:28.494027 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 May 13 00:49:28.494035 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 May 13 00:49:28.494044 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 May 13 00:49:28.494052 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 May 13 00:49:28.494064 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 May 13 00:49:28.488975 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 00:49:28.490509 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 00:49:28.494757 systemd[1]: Starting disk-uuid.service... May 13 00:49:28.501903 disk-uuid[488]: Primary Header is updated. May 13 00:49:28.501903 disk-uuid[488]: Secondary Entries is updated. May 13 00:49:28.501903 disk-uuid[488]: Secondary Header is updated. 
May 13 00:49:28.508386 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 00:49:28.512419 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:49:28.518696 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 00:49:28.803982 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 13 00:49:28.804060 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 13 00:49:28.804974 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 13 00:49:28.807742 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 13 00:49:28.807816 kernel: ata3.00: applying bridge limits May 13 00:49:28.807826 kernel: ata3.00: configured for UDMA/100 May 13 00:49:28.809036 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 13 00:49:28.813962 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 00:49:28.813986 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 00:49:28.814969 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 00:49:28.848013 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 13 00:49:28.865545 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:49:28.865561 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 00:49:29.542019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:49:29.542100 disk-uuid[493]: The operation has completed successfully. May 13 00:49:29.564173 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:49:29.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.564257 systemd[1]: Finished disk-uuid.service. 
May 13 00:49:29.568121 systemd[1]: Starting verity-setup.service... May 13 00:49:29.579959 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 13 00:49:29.598033 systemd[1]: Found device dev-mapper-usr.device. May 13 00:49:29.599183 systemd[1]: Mounting sysusr-usr.mount... May 13 00:49:29.602189 systemd[1]: Finished verity-setup.service. May 13 00:49:29.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.657747 systemd[1]: Mounted sysusr-usr.mount. May 13 00:49:29.659136 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 00:49:29.658275 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 00:49:29.658911 systemd[1]: Starting ignition-setup.service... May 13 00:49:29.661114 systemd[1]: Starting parse-ip-for-networkd.service... May 13 00:49:29.668693 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:49:29.668751 kernel: BTRFS info (device vda6): using free space tree May 13 00:49:29.668761 kernel: BTRFS info (device vda6): has skinny extents May 13 00:49:29.675811 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:49:29.684724 systemd[1]: Finished ignition-setup.service. May 13 00:49:29.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.686238 systemd[1]: Starting ignition-fetch-offline.service... May 13 00:49:29.720739 ignition[646]: Ignition 2.14.0 May 13 00:49:29.720749 ignition[646]: Stage: fetch-offline May 13 00:49:29.721023 systemd[1]: Finished parse-ip-for-networkd.service. 
May 13 00:49:29.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.722000 audit: BPF prog-id=9 op=LOAD May 13 00:49:29.720825 ignition[646]: no configs at "/usr/lib/ignition/base.d" May 13 00:49:29.723439 systemd[1]: Starting systemd-networkd.service... May 13 00:49:29.720834 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:49:29.720924 ignition[646]: parsed url from cmdline: "" May 13 00:49:29.720927 ignition[646]: no config URL provided May 13 00:49:29.720931 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:49:29.720938 ignition[646]: no config at "/usr/lib/ignition/user.ign" May 13 00:49:29.722990 ignition[646]: op(1): [started] loading QEMU firmware config module May 13 00:49:29.722995 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:49:29.733071 ignition[646]: op(1): [finished] loading QEMU firmware config module May 13 00:49:29.744740 systemd-networkd[723]: lo: Link UP May 13 00:49:29.744750 systemd-networkd[723]: lo: Gained carrier May 13 00:49:29.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.745153 systemd-networkd[723]: Enumeration completed May 13 00:49:29.745223 systemd[1]: Started systemd-networkd.service. May 13 00:49:29.746476 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:49:29.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.746758 systemd[1]: Reached target network.target. 
May 13 00:49:29.747485 systemd-networkd[723]: eth0: Link UP May 13 00:49:29.747489 systemd-networkd[723]: eth0: Gained carrier May 13 00:49:29.758103 iscsid[730]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 00:49:29.758103 iscsid[730]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 13 00:49:29.758103 iscsid[730]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 00:49:29.758103 iscsid[730]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 00:49:29.758103 iscsid[730]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 00:49:29.758103 iscsid[730]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 00:49:29.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.748788 systemd[1]: Starting iscsiuio.service... May 13 00:49:29.752531 systemd[1]: Started iscsiuio.service. May 13 00:49:29.754183 systemd[1]: Starting iscsid.service... May 13 00:49:29.758183 systemd[1]: Started iscsid.service. May 13 00:49:29.759587 systemd[1]: Starting dracut-initqueue.service... May 13 00:49:29.768505 systemd[1]: Finished dracut-initqueue.service. 
May 13 00:49:29.770337 systemd[1]: Reached target remote-fs-pre.target. May 13 00:49:29.772685 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:49:29.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.775118 systemd[1]: Reached target remote-fs.target. May 13 00:49:29.776577 systemd[1]: Starting dracut-pre-mount.service... May 13 00:49:29.783215 systemd[1]: Finished dracut-pre-mount.service. May 13 00:49:29.807300 ignition[646]: parsing config with SHA512: fab048a8e4193f82c63ec0ee4fc6715df649df3cbeba407e26c24f1bf8435cbb1a69a765d99f8104e6e8bf1d9d2e93bd50979ab348f4fe5d738561f1efbff23f May 13 00:49:29.813504 unknown[646]: fetched base config from "system" May 13 00:49:29.813516 unknown[646]: fetched user config from "qemu" May 13 00:49:29.814004 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:49:29.814596 ignition[646]: fetch-offline: fetch-offline passed May 13 00:49:29.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.816182 systemd[1]: Finished ignition-fetch-offline.service. May 13 00:49:29.814641 ignition[646]: Ignition finished successfully May 13 00:49:29.816671 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:49:29.819048 systemd[1]: Starting ignition-kargs.service... 
May 13 00:49:29.830877 ignition[744]: Ignition 2.14.0 May 13 00:49:29.830886 ignition[744]: Stage: kargs May 13 00:49:29.830980 ignition[744]: no configs at "/usr/lib/ignition/base.d" May 13 00:49:29.830990 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:49:29.831911 ignition[744]: kargs: kargs passed May 13 00:49:29.831941 ignition[744]: Ignition finished successfully May 13 00:49:29.836293 systemd[1]: Finished ignition-kargs.service. May 13 00:49:29.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.837393 systemd[1]: Starting ignition-disks.service... May 13 00:49:29.843782 ignition[750]: Ignition 2.14.0 May 13 00:49:29.843792 ignition[750]: Stage: disks May 13 00:49:29.843874 ignition[750]: no configs at "/usr/lib/ignition/base.d" May 13 00:49:29.843882 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:49:29.844940 ignition[750]: disks: disks passed May 13 00:49:29.844986 ignition[750]: Ignition finished successfully May 13 00:49:29.848457 systemd[1]: Finished ignition-disks.service. May 13 00:49:29.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.850083 systemd[1]: Reached target initrd-root-device.target. May 13 00:49:29.851804 systemd[1]: Reached target local-fs-pre.target. May 13 00:49:29.853383 systemd[1]: Reached target local-fs.target. May 13 00:49:29.854862 systemd[1]: Reached target sysinit.target. May 13 00:49:29.856334 systemd[1]: Reached target basic.target. May 13 00:49:29.858500 systemd[1]: Starting systemd-fsck-root.service... 
May 13 00:49:29.868394 systemd-fsck[758]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 13 00:49:29.873307 systemd[1]: Finished systemd-fsck-root.service. May 13 00:49:29.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.875884 systemd[1]: Mounting sysroot.mount... May 13 00:49:29.882609 systemd[1]: Mounted sysroot.mount. May 13 00:49:29.883892 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 00:49:29.882928 systemd[1]: Reached target initrd-root-fs.target. May 13 00:49:29.885440 systemd[1]: Mounting sysroot-usr.mount... May 13 00:49:29.885980 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 00:49:29.886010 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:49:29.886029 systemd[1]: Reached target ignition-diskful.target. May 13 00:49:29.887789 systemd[1]: Mounted sysroot-usr.mount. May 13 00:49:29.891158 systemd[1]: Starting initrd-setup-root.service... May 13 00:49:29.897999 initrd-setup-root[768]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:49:29.902140 initrd-setup-root[776]: cut: /sysroot/etc/group: No such file or directory May 13 00:49:29.905304 initrd-setup-root[784]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:49:29.908963 initrd-setup-root[792]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:49:29.933530 systemd[1]: Finished initrd-setup-root.service. May 13 00:49:29.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:29.935743 systemd[1]: Starting ignition-mount.service... May 13 00:49:29.937708 systemd[1]: Starting sysroot-boot.service... May 13 00:49:29.940231 bash[809]: umount: /sysroot/usr/share/oem: not mounted. May 13 00:49:29.948671 ignition[811]: INFO : Ignition 2.14.0 May 13 00:49:29.948671 ignition[811]: INFO : Stage: mount May 13 00:49:29.950252 ignition[811]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:49:29.950252 ignition[811]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:49:29.953221 systemd[1]: Finished sysroot-boot.service. May 13 00:49:29.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:29.954696 ignition[811]: INFO : mount: mount passed May 13 00:49:29.955444 ignition[811]: INFO : Ignition finished successfully May 13 00:49:29.957039 systemd[1]: Finished ignition-mount.service. May 13 00:49:29.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:30.608284 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 00:49:30.616583 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (820) May 13 00:49:30.616607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:49:30.616617 kernel: BTRFS info (device vda6): using free space tree May 13 00:49:30.617382 kernel: BTRFS info (device vda6): has skinny extents May 13 00:49:30.620880 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 00:49:30.622480 systemd[1]: Starting ignition-files.service... 
May 13 00:49:30.636124 ignition[840]: INFO : Ignition 2.14.0
May 13 00:49:30.636124 ignition[840]: INFO : Stage: files
May 13 00:49:30.637785 ignition[840]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:49:30.637785 ignition[840]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:49:30.637785 ignition[840]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:49:30.641294 ignition[840]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:49:30.641294 ignition[840]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:49:30.643950 ignition[840]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:49:30.645395 ignition[840]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:49:30.647168 unknown[840]: wrote ssh authorized keys file for user: core
May 13 00:49:30.648234 ignition[840]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:49:30.649843 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:49:30.651569 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:49:30.653228 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:49:30.655073 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 00:49:30.744784 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 00:49:30.964989 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:49:30.964989 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:49:30.968980 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 00:49:31.318121 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 00:49:31.429107 systemd-networkd[723]: eth0: Gained IPv6LL
May 13 00:49:31.695929 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:49:31.695929 ignition[840]: INFO : files: op(c): [started] processing unit "containerd.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:49:31.700000 ignition[840]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:49:31.729264 ignition[840]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:49:31.730970 ignition[840]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:49:31.730970 ignition[840]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:49:31.730970 ignition[840]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:49:31.730970 ignition[840]: INFO : files: files passed
May 13 00:49:31.730970 ignition[840]: INFO : Ignition finished successfully
May 13 00:49:31.738033 systemd[1]: Finished ignition-files.service.
May 13 00:49:31.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.739151 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 00:49:31.739921 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 00:49:31.740430 systemd[1]: Starting ignition-quench.service...
May 13 00:49:31.744081 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:49:31.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.744142 systemd[1]: Finished ignition-quench.service.
May 13 00:49:31.748687 initrd-setup-root-after-ignition[865]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 13 00:49:31.751391 initrd-setup-root-after-ignition[867]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:49:31.753390 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 00:49:31.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.753711 systemd[1]: Reached target ignition-complete.target.
May 13 00:49:31.756119 systemd[1]: Starting initrd-parse-etc.service...
May 13 00:49:31.767619 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:49:31.767696 systemd[1]: Finished initrd-parse-etc.service.
May 13 00:49:31.769425 systemd[1]: Reached target initrd-fs.target.
May 13 00:49:31.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.770816 systemd[1]: Reached target initrd.target.
May 13 00:49:31.771253 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 13 00:49:31.771794 systemd[1]: Starting dracut-pre-pivot.service...
May 13 00:49:31.780854 systemd[1]: Finished dracut-pre-pivot.service.
May 13 00:49:31.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.781838 systemd[1]: Starting initrd-cleanup.service...
May 13 00:49:31.788912 systemd[1]: Stopped target nss-lookup.target.
May 13 00:49:31.789413 systemd[1]: Stopped target remote-cryptsetup.target.
May 13 00:49:31.791270 systemd[1]: Stopped target timers.target.
May 13 00:49:31.792762 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:49:31.792847 systemd[1]: Stopped dracut-pre-pivot.service.
May 13 00:49:31.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.794196 systemd[1]: Stopped target initrd.target.
May 13 00:49:31.795699 systemd[1]: Stopped target basic.target.
May 13 00:49:31.797080 systemd[1]: Stopped target ignition-complete.target.
May 13 00:49:31.798528 systemd[1]: Stopped target ignition-diskful.target.
May 13 00:49:31.800035 systemd[1]: Stopped target initrd-root-device.target.
May 13 00:49:31.800553 systemd[1]: Stopped target remote-fs.target.
May 13 00:49:31.802694 systemd[1]: Stopped target remote-fs-pre.target.
May 13 00:49:31.804215 systemd[1]: Stopped target sysinit.target.
May 13 00:49:31.805972 systemd[1]: Stopped target local-fs.target.
May 13 00:49:31.807418 systemd[1]: Stopped target local-fs-pre.target.
May 13 00:49:31.808831 systemd[1]: Stopped target swap.target.
May 13 00:49:31.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.809342 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:49:31.809427 systemd[1]: Stopped dracut-pre-mount.service.
May 13 00:49:31.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.811287 systemd[1]: Stopped target cryptsetup.target.
May 13 00:49:31.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.812575 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:49:31.812660 systemd[1]: Stopped dracut-initqueue.service.
May 13 00:49:31.814247 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:49:31.814330 systemd[1]: Stopped ignition-fetch-offline.service.
May 13 00:49:31.815700 systemd[1]: Stopped target paths.target.
May 13 00:49:31.817289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:49:31.820980 systemd[1]: Stopped systemd-ask-password-console.path.
May 13 00:49:31.821494 systemd[1]: Stopped target slices.target.
May 13 00:49:31.823733 systemd[1]: Stopped target sockets.target.
May 13 00:49:31.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.825226 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:49:31.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.825312 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 13 00:49:31.826589 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:49:31.826666 systemd[1]: Stopped ignition-files.service.
May 13 00:49:31.828915 systemd[1]: Stopping ignition-mount.service...
May 13 00:49:31.837286 ignition[880]: INFO : Ignition 2.14.0
May 13 00:49:31.837286 ignition[880]: INFO : Stage: umount
May 13 00:49:31.837286 ignition[880]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:49:31.837286 ignition[880]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:49:31.837286 ignition[880]: INFO : umount: umount passed
May 13 00:49:31.837286 ignition[880]: INFO : Ignition finished successfully
May 13 00:49:31.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.846492 iscsid[730]: iscsid shutting down.
May 13 00:49:31.830006 systemd[1]: Stopping iscsid.service...
May 13 00:49:31.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.830346 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:49:31.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.830453 systemd[1]: Stopped kmod-static-nodes.service.
May 13 00:49:31.833529 systemd[1]: Stopping sysroot-boot.service...
May 13 00:49:31.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.837617 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:49:31.837728 systemd[1]: Stopped systemd-udev-trigger.service.
May 13 00:49:31.839359 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:49:31.839445 systemd[1]: Stopped dracut-pre-trigger.service.
May 13 00:49:31.842261 systemd[1]: iscsid.service: Deactivated successfully.
May 13 00:49:31.842333 systemd[1]: Stopped iscsid.service.
May 13 00:49:31.844106 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:49:31.844169 systemd[1]: Stopped ignition-mount.service.
May 13 00:49:31.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.846250 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:49:31.846676 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:49:31.846740 systemd[1]: Closed iscsid.socket.
May 13 00:49:31.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.848011 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:49:31.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.848041 systemd[1]: Stopped ignition-disks.service.
May 13 00:49:31.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.849481 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:49:31.849512 systemd[1]: Stopped ignition-kargs.service.
May 13 00:49:31.850331 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:49:31.850360 systemd[1]: Stopped ignition-setup.service.
May 13 00:49:31.851212 systemd[1]: Stopping iscsiuio.service...
May 13 00:49:31.853021 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:49:31.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.883000 audit: BPF prog-id=6 op=UNLOAD
May 13 00:49:31.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.853084 systemd[1]: Finished initrd-cleanup.service.
May 13 00:49:31.854514 systemd[1]: iscsiuio.service: Deactivated successfully.
May 13 00:49:31.854575 systemd[1]: Stopped iscsiuio.service.
May 13 00:49:31.856549 systemd[1]: Stopped target network.target.
May 13 00:49:31.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.857738 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:49:31.857764 systemd[1]: Closed iscsiuio.socket.
May 13 00:49:31.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.859262 systemd[1]: Stopping systemd-networkd.service...
May 13 00:49:31.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.861051 systemd[1]: Stopping systemd-resolved.service...
May 13 00:49:31.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.862983 systemd-networkd[723]: eth0: DHCPv6 lease lost
May 13 00:49:31.897000 audit: BPF prog-id=9 op=UNLOAD
May 13 00:49:31.864251 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:49:31.864319 systemd[1]: Stopped systemd-networkd.service.
May 13 00:49:31.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.866923 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:49:31.866962 systemd[1]: Closed systemd-networkd.socket.
May 13 00:49:31.869086 systemd[1]: Stopping network-cleanup.service...
May 13 00:49:31.870266 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:49:31.870301 systemd[1]: Stopped parse-ip-for-networkd.service.
May 13 00:49:31.871830 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:49:31.871859 systemd[1]: Stopped systemd-sysctl.service.
May 13 00:49:31.872709 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:49:31.872737 systemd[1]: Stopped systemd-modules-load.service.
May 13 00:49:31.874526 systemd[1]: Stopping systemd-udevd.service...
May 13 00:49:31.876928 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 00:49:31.877321 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:49:31.877393 systemd[1]: Stopped systemd-resolved.service.
May 13 00:49:31.882850 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:49:31.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.882928 systemd[1]: Stopped network-cleanup.service.
May 13 00:49:31.884082 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:49:31.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.884178 systemd[1]: Stopped systemd-udevd.service.
May 13 00:49:31.886800 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:49:31.886828 systemd[1]: Closed systemd-udevd-control.socket.
May 13 00:49:31.888436 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:49:31.888459 systemd[1]: Closed systemd-udevd-kernel.socket.
May 13 00:49:31.889907 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:49:31.889936 systemd[1]: Stopped dracut-pre-udev.service.
May 13 00:49:31.925000 audit: BPF prog-id=5 op=UNLOAD
May 13 00:49:31.925000 audit: BPF prog-id=4 op=UNLOAD
May 13 00:49:31.925000 audit: BPF prog-id=3 op=UNLOAD
May 13 00:49:31.891617 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:49:31.925000 audit: BPF prog-id=8 op=UNLOAD
May 13 00:49:31.925000 audit: BPF prog-id=7 op=UNLOAD
May 13 00:49:31.891648 systemd[1]: Stopped dracut-cmdline.service.
May 13 00:49:31.893453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:49:31.893483 systemd[1]: Stopped dracut-cmdline-ask.service.
May 13 00:49:31.895604 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 13 00:49:31.896680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:49:31.896720 systemd[1]: Stopped systemd-vconsole-setup.service.
May 13 00:49:31.899921 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:49:31.900020 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 13 00:49:31.913231 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:49:31.913302 systemd[1]: Stopped sysroot-boot.service.
May 13 00:49:31.914818 systemd[1]: Reached target initrd-switch-root.target.
May 13 00:49:31.916540 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:49:31.916574 systemd[1]: Stopped initrd-setup-root.service.
May 13 00:49:31.918637 systemd[1]: Starting initrd-switch-root.service...
May 13 00:49:31.923113 systemd[1]: Switching root.
May 13 00:49:31.943654 systemd-journald[198]: Journal stopped
May 13 00:49:34.451331 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
May 13 00:49:34.451388 kernel: SELinux: Class mctp_socket not defined in policy.
May 13 00:49:34.451400 kernel: SELinux: Class anon_inode not defined in policy.
May 13 00:49:34.451414 kernel: SELinux: the above unknown classes and permissions will be allowed
May 13 00:49:34.451424 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:49:34.451433 kernel: SELinux: policy capability open_perms=1
May 13 00:49:34.451442 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:49:34.451452 kernel: SELinux: policy capability always_check_network=0
May 13 00:49:34.451461 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:49:34.451475 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:49:34.451485 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:49:34.451494 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:49:34.451505 systemd[1]: Successfully loaded SELinux policy in 40.106ms.
May 13 00:49:34.451531 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.172ms.
May 13 00:49:34.451542 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:49:34.451553 systemd[1]: Detected virtualization kvm.
May 13 00:49:34.451563 systemd[1]: Detected architecture x86-64.
May 13 00:49:34.451574 systemd[1]: Detected first boot.
May 13 00:49:34.451584 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:49:34.451594 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 13 00:49:34.451604 kernel: kauditd_printk_skb: 72 callbacks suppressed
May 13 00:49:34.451618 kernel: audit: type=1400 audit(1747097372.337:83): avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 13 00:49:34.451631 kernel: audit: type=1300 audit(1747097372.337:83): arch=c000003e syscall=188 success=yes exit=0 a0=c0001896b2 a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:49:34.451642 kernel: audit: type=1327 audit(1747097372.337:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 00:49:34.451652 kernel: audit: type=1400 audit(1747097372.339:84): avc: denied { associate } for pid=931 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 13 00:49:34.451663 kernel: audit: type=1300 audit(1747097372.339:84): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000189789 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:49:34.451672 kernel: audit: type=1307 audit(1747097372.339:84): cwd="/"
May 13 00:49:34.451682 kernel: audit: type=1302 audit(1747097372.339:84): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:34.451693 kernel: audit: type=1302 audit(1747097372.339:84): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:34.451703 kernel: audit: type=1327 audit(1747097372.339:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 00:49:34.451713 systemd[1]: Populated /etc with preset unit settings.
May 13 00:49:34.451724 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:49:34.451734 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:49:34.451746 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:49:34.451759 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:49:34.451771 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 13 00:49:34.451781 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 13 00:49:34.451791 systemd[1]: Created slice system-addon\x2drun.slice.
May 13 00:49:34.451800 systemd[1]: Created slice system-getty.slice.
May 13 00:49:34.451811 systemd[1]: Created slice system-modprobe.slice.
May 13 00:49:34.451821 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 13 00:49:34.451831 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:49:34.451843 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:49:34.451853 systemd[1]: Created slice user.slice. May 13 00:49:34.451864 systemd[1]: Started systemd-ask-password-console.path. May 13 00:49:34.451874 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:49:34.451884 systemd[1]: Set up automount boot.automount. May 13 00:49:34.451894 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:49:34.451904 systemd[1]: Reached target integritysetup.target. May 13 00:49:34.451914 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:49:34.451924 systemd[1]: Reached target remote-fs.target. May 13 00:49:34.451936 systemd[1]: Reached target slices.target. May 13 00:49:34.451968 systemd[1]: Reached target swap.target. May 13 00:49:34.451979 systemd[1]: Reached target torcx.target. May 13 00:49:34.451989 systemd[1]: Reached target veritysetup.target. May 13 00:49:34.451999 systemd[1]: Listening on systemd-coredump.socket. May 13 00:49:34.452009 systemd[1]: Listening on systemd-initctl.socket. May 13 00:49:34.452019 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:49:34.452029 kernel: audit: type=1400 audit(1747097374.375:85): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:49:34.452039 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:49:34.452050 systemd[1]: Listening on systemd-journald.socket. May 13 00:49:34.452061 systemd[1]: Listening on systemd-networkd.socket. May 13 00:49:34.452071 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:49:34.452080 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:49:34.452091 systemd[1]: Listening on systemd-userdbd.socket. 
May 13 00:49:34.452101 systemd[1]: Mounting dev-hugepages.mount... May 13 00:49:34.452119 systemd[1]: Mounting dev-mqueue.mount... May 13 00:49:34.452129 systemd[1]: Mounting media.mount... May 13 00:49:34.452140 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:34.452150 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:49:34.452161 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:49:34.452171 systemd[1]: Mounting tmp.mount... May 13 00:49:34.452181 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:49:34.452192 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:49:34.452203 systemd[1]: Starting kmod-static-nodes.service... May 13 00:49:34.452213 systemd[1]: Starting modprobe@configfs.service... May 13 00:49:34.452223 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:49:34.452232 systemd[1]: Starting modprobe@drm.service... May 13 00:49:34.452242 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:49:34.452253 systemd[1]: Starting modprobe@fuse.service... May 13 00:49:34.452263 systemd[1]: Starting modprobe@loop.service... May 13 00:49:34.452273 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:49:34.452284 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 00:49:34.452293 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 13 00:49:34.452303 systemd[1]: Starting systemd-journald.service... May 13 00:49:34.452314 kernel: fuse: init (API version 7.34) May 13 00:49:34.452324 systemd[1]: Starting systemd-modules-load.service... May 13 00:49:34.452335 kernel: loop: module loaded May 13 00:49:34.452345 systemd[1]: Starting systemd-network-generator.service... 
May 13 00:49:34.452354 systemd[1]: Starting systemd-remount-fs.service... May 13 00:49:34.452365 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:49:34.452376 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:34.452386 systemd[1]: Mounted dev-hugepages.mount. May 13 00:49:34.452397 systemd-journald[1022]: Journal started May 13 00:49:34.452441 systemd-journald[1022]: Runtime Journal (/run/log/journal/13ce0b4767434d2493854224bbf3c099) is 6.0M, max 48.4M, 42.4M free. May 13 00:49:34.375000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:49:34.375000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 13 00:49:34.449000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:49:34.449000 audit[1022]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd58fb8140 a2=4000 a3=7ffd58fb81dc items=0 ppid=1 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:34.449000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:49:34.456501 systemd[1]: Started systemd-journald.service. May 13 00:49:34.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.455904 systemd[1]: Mounted dev-mqueue.mount. 
May 13 00:49:34.456793 systemd[1]: Mounted media.mount. May 13 00:49:34.457706 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:49:34.458698 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:49:34.459668 systemd[1]: Mounted tmp.mount. May 13 00:49:34.460785 systemd[1]: Finished kmod-static-nodes.service. May 13 00:49:34.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.461846 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:49:34.462100 systemd[1]: Finished modprobe@configfs.service. May 13 00:49:34.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.463160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:49:34.463340 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:49:34.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.464409 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:49:34.466239 systemd[1]: Finished modprobe@drm.service. 
May 13 00:49:34.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.467361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:49:34.467573 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:49:34.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.468654 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:49:34.468829 systemd[1]: Finished modprobe@fuse.service. May 13 00:49:34.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.469885 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:49:34.470102 systemd[1]: Finished modprobe@loop.service. 
May 13 00:49:34.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.471473 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:49:34.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.472719 systemd[1]: Finished systemd-modules-load.service. May 13 00:49:34.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.474069 systemd[1]: Finished systemd-network-generator.service. May 13 00:49:34.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.475443 systemd[1]: Finished systemd-remount-fs.service. May 13 00:49:34.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.476680 systemd[1]: Reached target network-pre.target. May 13 00:49:34.478771 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:49:34.480536 systemd[1]: Mounting sys-kernel-config.mount... 
May 13 00:49:34.481303 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:49:34.482890 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:49:34.484692 systemd[1]: Starting systemd-journal-flush.service... May 13 00:49:34.487040 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:49:34.487870 systemd[1]: Starting systemd-random-seed.service... May 13 00:49:34.491488 systemd-journald[1022]: Time spent on flushing to /var/log/journal/13ce0b4767434d2493854224bbf3c099 is 20.173ms for 1096 entries. May 13 00:49:34.491488 systemd-journald[1022]: System Journal (/var/log/journal/13ce0b4767434d2493854224bbf3c099) is 8.0M, max 195.6M, 187.6M free. May 13 00:49:34.521886 systemd-journald[1022]: Received client request to flush runtime journal. May 13 00:49:34.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.489160 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 13 00:49:34.490244 systemd[1]: Starting systemd-sysctl.service... May 13 00:49:34.492481 systemd[1]: Starting systemd-sysusers.service... May 13 00:49:34.496185 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:49:34.497203 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:49:34.503247 systemd[1]: Finished systemd-random-seed.service. May 13 00:49:34.504276 systemd[1]: Reached target first-boot-complete.target. May 13 00:49:34.508656 systemd[1]: Finished systemd-sysctl.service. May 13 00:49:34.513264 systemd[1]: Finished systemd-sysusers.service. May 13 00:49:34.515128 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:49:34.517185 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:49:34.519088 systemd[1]: Starting systemd-udev-settle.service... May 13 00:49:34.522735 systemd[1]: Finished systemd-journal-flush.service. May 13 00:49:34.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.526740 udevadm[1070]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:49:34.535574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:49:34.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.904101 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:49:34.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:34.906179 systemd[1]: Starting systemd-udevd.service... May 13 00:49:34.922019 systemd-udevd[1075]: Using default interface naming scheme 'v252'. May 13 00:49:34.933778 systemd[1]: Started systemd-udevd.service. May 13 00:49:34.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.936746 systemd[1]: Starting systemd-networkd.service... May 13 00:49:34.949109 systemd[1]: Starting systemd-userdbd.service... May 13 00:49:34.960605 systemd[1]: Found device dev-ttyS0.device. May 13 00:49:34.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:34.983920 systemd[1]: Started systemd-userdbd.service. May 13 00:49:34.987699 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:49:35.002969 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:49:35.006965 kernel: ACPI: button: Power Button [PWRF] May 13 00:49:35.024218 systemd-networkd[1088]: lo: Link UP May 13 00:49:35.024498 systemd-networkd[1088]: lo: Gained carrier May 13 00:49:35.024905 systemd-networkd[1088]: Enumeration completed May 13 00:49:35.025342 systemd[1]: Started systemd-networkd.service. May 13 00:49:35.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.026538 systemd-networkd[1088]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 13 00:49:35.027488 systemd-networkd[1088]: eth0: Link UP May 13 00:49:35.027602 systemd-networkd[1088]: eth0: Gained carrier May 13 00:49:35.023000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 00:49:35.023000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561b75f51550 a1=338ac a2=7f4e0f9dbbc5 a3=5 items=110 ppid=1075 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:35.023000 audit: CWD cwd="/" May 13 00:49:35.023000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=1 name=(null) inode=13489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=2 name=(null) inode=13489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=3 name=(null) inode=13490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=4 name=(null) inode=13489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=5 name=(null) inode=13491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=6 name=(null) inode=13489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=7 name=(null) inode=13492 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=8 name=(null) inode=13492 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=9 name=(null) inode=13493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=10 name=(null) inode=13492 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=11 name=(null) inode=13494 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=12 name=(null) inode=13492 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=13 name=(null) inode=13495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=14 name=(null) inode=13492 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=15 name=(null) inode=13496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=16 name=(null) inode=13492 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=17 name=(null) inode=13497 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=18 name=(null) inode=13489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=19 name=(null) inode=13498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=20 name=(null) inode=13498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=21 name=(null) inode=13499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=22 name=(null) inode=13498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=23 name=(null) inode=13500 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 
00:49:35.023000 audit: PATH item=24 name=(null) inode=13498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=25 name=(null) inode=13501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=26 name=(null) inode=13498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=27 name=(null) inode=13502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=28 name=(null) inode=13498 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=29 name=(null) inode=13503 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=30 name=(null) inode=13489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=31 name=(null) inode=13504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=32 name=(null) inode=13504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=33 
name=(null) inode=13505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=34 name=(null) inode=13504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=35 name=(null) inode=13506 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=36 name=(null) inode=13504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=37 name=(null) inode=13507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=38 name=(null) inode=13504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=39 name=(null) inode=13508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=40 name=(null) inode=13504 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=41 name=(null) inode=13509 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=42 name=(null) inode=13489 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=43 name=(null) inode=13510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=44 name=(null) inode=13510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=45 name=(null) inode=13511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=46 name=(null) inode=13510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=47 name=(null) inode=13512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=48 name=(null) inode=13510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=49 name=(null) inode=13513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=50 name=(null) inode=13510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=51 name=(null) inode=13514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=52 name=(null) inode=13510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=53 name=(null) inode=13515 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=55 name=(null) inode=13516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=56 name=(null) inode=13516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=57 name=(null) inode=13517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=58 name=(null) inode=13516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=59 name=(null) inode=13518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=60 name=(null) inode=13516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=61 name=(null) inode=13519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=62 name=(null) inode=13519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=63 name=(null) inode=13520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=64 name=(null) inode=13519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=65 name=(null) inode=13521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=66 name=(null) inode=13519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=67 name=(null) inode=13522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=68 name=(null) inode=13519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=69 name=(null) inode=13523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=70 name=(null) inode=13519 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=71 name=(null) inode=13524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=72 name=(null) inode=13516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=73 name=(null) inode=13525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=74 name=(null) inode=13525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=75 name=(null) inode=13526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=76 name=(null) inode=13525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=77 name=(null) inode=13527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=78 name=(null) inode=13525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 
00:49:35.023000 audit: PATH item=79 name=(null) inode=13528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=80 name=(null) inode=13525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=81 name=(null) inode=13529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=82 name=(null) inode=13525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=83 name=(null) inode=13530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=84 name=(null) inode=13516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=85 name=(null) inode=13531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=86 name=(null) inode=13531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=87 name=(null) inode=13532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=88 
name=(null) inode=13531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=89 name=(null) inode=13533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=90 name=(null) inode=13531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=91 name=(null) inode=13534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=92 name=(null) inode=13531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=93 name=(null) inode=13535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=94 name=(null) inode=13531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=95 name=(null) inode=13536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=96 name=(null) inode=13516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=97 name=(null) inode=13537 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=98 name=(null) inode=13537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=99 name=(null) inode=13538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=100 name=(null) inode=13537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=101 name=(null) inode=13539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=102 name=(null) inode=13537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=103 name=(null) inode=13540 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=104 name=(null) inode=13537 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=105 name=(null) inode=13541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=106 name=(null) inode=13537 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=107 name=(null) inode=13542 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PATH item=109 name=(null) inode=13543 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:35.023000 audit: PROCTITLE proctitle="(udev-worker)" May 13 00:49:35.041972 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:49:35.051357 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 13 00:49:35.069677 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 00:49:35.069785 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:49:35.069891 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:49:35.051522 systemd-networkd[1088]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:49:35.080981 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:49:35.117532 kernel: kvm: Nested Virtualization enabled May 13 00:49:35.117582 kernel: SVM: kvm: Nested Paging enabled May 13 00:49:35.117597 kernel: SVM: Virtual VMLOAD VMSAVE supported May 13 00:49:35.117609 kernel: SVM: Virtual GIF supported May 13 00:49:35.133969 kernel: EDAC MC: Ver: 3.0.0 May 13 00:49:35.159357 systemd[1]: Finished systemd-udev-settle.service. 
May 13 00:49:35.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.161474 systemd[1]: Starting lvm2-activation-early.service... May 13 00:49:35.168672 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:49:35.195593 systemd[1]: Finished lvm2-activation-early.service. May 13 00:49:35.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.196605 systemd[1]: Reached target cryptsetup.target. May 13 00:49:35.198371 systemd[1]: Starting lvm2-activation.service... May 13 00:49:35.202005 lvm[1114]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:49:35.232879 systemd[1]: Finished lvm2-activation.service. May 13 00:49:35.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.233821 systemd[1]: Reached target local-fs-pre.target. May 13 00:49:35.234781 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:49:35.234804 systemd[1]: Reached target local-fs.target. May 13 00:49:35.235616 systemd[1]: Reached target machines.target. May 13 00:49:35.237344 systemd[1]: Starting ldconfig.service... May 13 00:49:35.238323 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 13 00:49:35.238365 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:49:35.239190 systemd[1]: Starting systemd-boot-update.service... May 13 00:49:35.241006 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:49:35.242993 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:49:35.245247 systemd[1]: Starting systemd-sysext.service... May 13 00:49:35.246573 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1117 (bootctl) May 13 00:49:35.247533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:49:35.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.253053 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:49:35.259367 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:49:35.263543 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:49:35.263768 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:49:35.272978 kernel: loop0: detected capacity change from 0 to 210664 May 13 00:49:35.280671 systemd-fsck[1125]: fsck.fat 4.2 (2021-01-31) May 13 00:49:35.280671 systemd-fsck[1125]: /dev/vda1: 791 files, 120712/258078 clusters May 13 00:49:35.281851 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:49:35.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.284429 systemd[1]: Mounting boot.mount... 
May 13 00:49:35.297792 systemd[1]: Mounted boot.mount. May 13 00:49:35.494969 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:49:35.499645 systemd[1]: Finished systemd-boot-update.service. May 13 00:49:35.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.501794 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:49:35.502937 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:49:35.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.510962 kernel: loop1: detected capacity change from 0 to 210664 May 13 00:49:35.515440 (sd-sysext)[1138]: Using extensions 'kubernetes'. May 13 00:49:35.515765 (sd-sysext)[1138]: Merged extensions into '/usr'. May 13 00:49:35.532896 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:35.534187 systemd[1]: Mounting usr-share-oem.mount... May 13 00:49:35.535227 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:49:35.536158 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:49:35.537837 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:49:35.539661 systemd[1]: Starting modprobe@loop.service... May 13 00:49:35.540468 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:49:35.540572 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 13 00:49:35.540661 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:35.543130 systemd[1]: Mounted usr-share-oem.mount. May 13 00:49:35.544387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:49:35.544521 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:49:35.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.545747 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:49:35.545866 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:49:35.545923 ldconfig[1116]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:49:35.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.547191 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:49:35.547328 systemd[1]: Finished modprobe@loop.service. May 13 00:49:35.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:35.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.548577 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:49:35.548664 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:49:35.549476 systemd[1]: Finished systemd-sysext.service. May 13 00:49:35.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.551483 systemd[1]: Starting ensure-sysext.service... May 13 00:49:35.553146 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:49:35.554408 systemd[1]: Finished ldconfig.service. May 13 00:49:35.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.558324 systemd[1]: Reloading. May 13 00:49:35.561803 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:49:35.562740 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:49:35.564055 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 13 00:49:35.603897 /usr/lib/systemd/system-generators/torcx-generator[1175]: time="2025-05-13T00:49:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:49:35.603925 /usr/lib/systemd/system-generators/torcx-generator[1175]: time="2025-05-13T00:49:35Z" level=info msg="torcx already run" May 13 00:49:35.669480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:49:35.669496 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:49:35.686083 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:49:35.741215 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:49:35.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.744812 systemd[1]: Starting audit-rules.service... May 13 00:49:35.746519 systemd[1]: Starting clean-ca-certificates.service... May 13 00:49:35.748456 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:49:35.750774 systemd[1]: Starting systemd-resolved.service... May 13 00:49:35.752710 systemd[1]: Starting systemd-timesyncd.service... May 13 00:49:35.754696 systemd[1]: Starting systemd-update-utmp.service... May 13 00:49:35.757159 systemd[1]: Finished clean-ca-certificates.service. 
May 13 00:49:35.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.762131 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:35.762330 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:49:35.761000 audit[1234]: SYSTEM_BOOT pid=1234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:49:35.763405 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:49:35.765167 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:49:35.766904 systemd[1]: Starting modprobe@loop.service... May 13 00:49:35.767767 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:49:35.767870 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:49:35.767985 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:49:35.768049 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:35.768912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:49:35.769058 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:49:35.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:35.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.770593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:49:35.770710 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:49:35.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.772213 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:49:35.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.773585 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:49:35.773712 systemd[1]: Finished modprobe@loop.service. May 13 00:49:35.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:35.776514 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:49:35.776649 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:49:35.777823 systemd[1]: Starting systemd-update-done.service... May 13 00:49:35.781316 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:35.781535 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:49:35.783210 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:49:35.784903 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:49:35.786640 systemd[1]: Starting modprobe@loop.service... May 13 00:49:35.787498 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:49:35.787604 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:49:35.787695 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:49:35.787755 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:35.789085 systemd[1]: Finished systemd-update-utmp.service. May 13 00:49:35.791437 systemd[1]: Finished systemd-update-done.service. May 13 00:49:35.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.793250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 13 00:49:35.793390 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:49:35.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:35.795165 augenrules[1260]: No rules May 13 00:49:35.794000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:49:35.794000 audit[1260]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff89a6de80 a2=420 a3=0 items=0 ppid=1222 pid=1260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:35.794000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:49:35.795956 systemd[1]: Finished audit-rules.service. May 13 00:49:35.797034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:49:35.797174 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:49:35.798375 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:49:35.798511 systemd[1]: Finished modprobe@loop.service. May 13 00:49:35.802877 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:35.803104 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:49:35.804208 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:49:35.805880 systemd[1]: Starting modprobe@drm.service... May 13 00:49:35.808022 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:49:35.809687 systemd[1]: Starting modprobe@loop.service... 
May 13 00:49:35.810504 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:49:35.810606 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:49:35.811675 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:49:35.812685 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:49:35.812772 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:35.813738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:49:35.813883 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:49:35.815208 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:49:35.815339 systemd[1]: Finished modprobe@drm.service. May 13 00:49:35.816466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:49:35.816584 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:49:35.817814 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:49:35.817996 systemd[1]: Finished modprobe@loop.service. May 13 00:49:35.819283 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:49:35.819372 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:49:35.820330 systemd[1]: Finished ensure-sysext.service. May 13 00:49:35.838802 systemd[1]: Started systemd-timesyncd.service. May 13 00:49:35.840136 systemd-timesyncd[1233]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:49:35.840170 systemd[1]: Reached target time-set.target. 
May 13 00:49:35.840177 systemd-timesyncd[1233]: Initial clock synchronization to Tue 2025-05-13 00:49:36.156262 UTC. May 13 00:49:35.841060 systemd-resolved[1228]: Positive Trust Anchors: May 13 00:49:35.841294 systemd-resolved[1228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:49:35.841396 systemd-resolved[1228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:49:35.848231 systemd-resolved[1228]: Defaulting to hostname 'linux'. May 13 00:49:35.849490 systemd[1]: Started systemd-resolved.service. May 13 00:49:35.850389 systemd[1]: Reached target network.target. May 13 00:49:35.851194 systemd[1]: Reached target nss-lookup.target. May 13 00:49:35.852034 systemd[1]: Reached target sysinit.target. May 13 00:49:35.852882 systemd[1]: Started motdgen.path. May 13 00:49:35.853625 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:49:35.854863 systemd[1]: Started logrotate.timer. May 13 00:49:35.855684 systemd[1]: Started mdadm.timer. May 13 00:49:35.856397 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:49:35.857275 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:49:35.857340 systemd[1]: Reached target paths.target. May 13 00:49:35.858117 systemd[1]: Reached target timers.target. May 13 00:49:35.859165 systemd[1]: Listening on dbus.socket. May 13 00:49:35.860826 systemd[1]: Starting docker.socket... May 13 00:49:35.862371 systemd[1]: Listening on sshd.socket. 
May 13 00:49:35.863217 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:35.863437 systemd[1]: Listening on docker.socket.
May 13 00:49:35.864234 systemd[1]: Reached target sockets.target.
May 13 00:49:35.865026 systemd[1]: Reached target basic.target.
May 13 00:49:35.865900 systemd[1]: System is tainted: cgroupsv1
May 13 00:49:35.865938 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 00:49:35.865967 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 00:49:35.866775 systemd[1]: Starting containerd.service...
May 13 00:49:35.868398 systemd[1]: Starting dbus.service...
May 13 00:49:35.869929 systemd[1]: Starting enable-oem-cloudinit.service...
May 13 00:49:35.871718 systemd[1]: Starting extend-filesystems.service...
May 13 00:49:35.872732 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 13 00:49:35.874683 jq[1286]: false
May 13 00:49:35.873646 systemd[1]: Starting motdgen.service...
May 13 00:49:35.875326 systemd[1]: Starting prepare-helm.service...
May 13 00:49:35.877163 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 13 00:49:35.879156 systemd[1]: Starting sshd-keygen.service...
May 13 00:49:35.881574 systemd[1]: Starting systemd-logind.service...
May 13 00:49:35.882342 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:35.882391 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:49:35.883303 systemd[1]: Starting update-engine.service...
May 13 00:49:35.885758 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 13 00:49:35.887938 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:49:35.896134 jq[1300]: true
May 13 00:49:35.891211 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 13 00:49:35.892235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:49:35.892669 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 13 00:49:35.902263 jq[1312]: true
May 13 00:49:35.904260 extend-filesystems[1287]: Found loop1
May 13 00:49:35.905139 extend-filesystems[1287]: Found sr0
May 13 00:49:35.905139 extend-filesystems[1287]: Found vda
May 13 00:49:35.905139 extend-filesystems[1287]: Found vda1
May 13 00:49:35.905139 extend-filesystems[1287]: Found vda2
May 13 00:49:35.905139 extend-filesystems[1287]: Found vda3
May 13 00:49:35.905139 extend-filesystems[1287]: Found usr
May 13 00:49:35.905139 extend-filesystems[1287]: Found vda4
May 13 00:49:35.905139 extend-filesystems[1287]: Found vda6
May 13 00:49:35.905139 extend-filesystems[1287]: Found vda7
May 13 00:49:35.905139 extend-filesystems[1287]: Found vda9
May 13 00:49:35.905139 extend-filesystems[1287]: Checking size of /dev/vda9
May 13 00:49:35.922677 tar[1310]: linux-amd64/helm
May 13 00:49:35.916122 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:49:35.916573 systemd[1]: Finished motdgen.service.
May 13 00:49:35.925848 dbus-daemon[1285]: [system] SELinux support is enabled
May 13 00:49:35.926020 systemd[1]: Started dbus.service.
May 13 00:49:35.928593 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:49:35.928619 systemd[1]: Reached target system-config.target.
May 13 00:49:35.929538 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:49:35.929552 systemd[1]: Reached target user-config.target.
May 13 00:49:35.932078 update_engine[1298]: I0513 00:49:35.931913  1298 main.cc:92] Flatcar Update Engine starting
May 13 00:49:35.946465 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 00:49:35.946515 extend-filesystems[1287]: Resized partition /dev/vda9
May 13 00:49:35.944097 systemd[1]: Started update-engine.service.
May 13 00:49:35.948021 update_engine[1298]: I0513 00:49:35.944156  1298 update_check_scheduler.cc:74] Next update check in 11m51s
May 13 00:49:35.948048 extend-filesystems[1345]: resize2fs 1.46.5 (30-Dec-2021)
May 13 00:49:35.951043 env[1313]: time="2025-05-13T00:49:35.947840344Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 13 00:49:35.948870 systemd[1]: Started locksmithd.service.
May 13 00:49:35.950911 systemd-logind[1296]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 00:49:35.950926 systemd-logind[1296]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 00:49:35.951644 systemd-logind[1296]: New seat seat0.
May 13 00:49:35.954490 systemd[1]: Started systemd-logind.service.
May 13 00:49:35.967971 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 00:49:35.974461 env[1313]: time="2025-05-13T00:49:35.974419760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 00:49:35.987347 extend-filesystems[1345]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 00:49:35.987347 extend-filesystems[1345]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 00:49:35.987347 extend-filesystems[1345]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 00:49:35.992601 extend-filesystems[1287]: Resized filesystem in /dev/vda9
May 13 00:49:35.994800 bash[1339]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:49:35.987979 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.989801672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992061050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992091527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992387893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992404684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992415885Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992424622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992516935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992742588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 00:49:35.996254 env[1313]: time="2025-05-13T00:49:35.992902758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:49:35.988213 systemd[1]: Finished extend-filesystems.service.
May 13 00:49:35.996556 env[1313]: time="2025-05-13T00:49:35.992927615Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 00:49:35.996556 env[1313]: time="2025-05-13T00:49:35.992985403Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 13 00:49:35.996556 env[1313]: time="2025-05-13T00:49:35.992998758Z" level=info msg="metadata content store policy set" policy=shared
May 13 00:49:35.992698 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 13 00:49:35.998965 env[1313]: time="2025-05-13T00:49:35.998906069Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 00:49:35.998965 env[1313]: time="2025-05-13T00:49:35.998937338Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 00:49:35.998965 env[1313]: time="2025-05-13T00:49:35.998960030Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 00:49:35.999076 env[1313]: time="2025-05-13T00:49:35.998990137Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 00:49:35.999076 env[1313]: time="2025-05-13T00:49:35.999003842Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 00:49:35.999076 env[1313]: time="2025-05-13T00:49:35.999016025Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 00:49:35.999076 env[1313]: time="2025-05-13T00:49:35.999030482Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 00:49:35.999076 env[1313]: time="2025-05-13T00:49:35.999042385Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 00:49:35.999076 env[1313]: time="2025-05-13T00:49:35.999054507Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 13 00:49:35.999076 env[1313]: time="2025-05-13T00:49:35.999076479Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 00:49:35.999209 env[1313]: time="2025-05-13T00:49:35.999087890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 00:49:35.999209 env[1313]: time="2025-05-13T00:49:35.999100253Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 00:49:35.999209 env[1313]: time="2025-05-13T00:49:35.999174492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 00:49:35.999270 env[1313]: time="2025-05-13T00:49:35.999235507Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000249168Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000318909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000336963Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000404059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000421942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000437662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000451658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000467458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000482626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000496182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000511350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.000623290Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.001039037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.001190854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002049 env[1313]: time="2025-05-13T00:49:36.001255238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002360 env[1313]: time="2025-05-13T00:49:36.001442913Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 00:49:36.002360 env[1313]: time="2025-05-13T00:49:36.001536417Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 13 00:49:36.002360 env[1313]: time="2025-05-13T00:49:36.001615184Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 00:49:36.002360 env[1313]: time="2025-05-13T00:49:36.001641241Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 13 00:49:36.002360 env[1313]: time="2025-05-13T00:49:36.001683276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 13 00:49:36.002467 env[1313]: time="2025-05-13T00:49:36.001981368Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 00:49:36.002467 env[1313]: time="2025-05-13T00:49:36.002068258Z" level=info msg="Connect containerd service"
May 13 00:49:36.002467 env[1313]: time="2025-05-13T00:49:36.002126810Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 00:49:36.005527 env[1313]: time="2025-05-13T00:49:36.002873466Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:49:36.005527 env[1313]: time="2025-05-13T00:49:36.003090177Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 00:49:36.005527 env[1313]: time="2025-05-13T00:49:36.003119568Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 00:49:36.005527 env[1313]: time="2025-05-13T00:49:36.003165737Z" level=info msg="containerd successfully booted in 0.055934s"
May 13 00:49:36.003250 systemd[1]: Started containerd.service.
May 13 00:49:36.006964 env[1313]: time="2025-05-13T00:49:36.006938628Z" level=info msg="Start subscribing containerd event"
May 13 00:49:36.018711 env[1313]: time="2025-05-13T00:49:36.018420263Z" level=info msg="Start recovering state"
May 13 00:49:36.018711 env[1313]: time="2025-05-13T00:49:36.018508955Z" level=info msg="Start event monitor"
May 13 00:49:36.018711 env[1313]: time="2025-05-13T00:49:36.018526733Z" level=info msg="Start snapshots syncer"
May 13 00:49:36.018711 env[1313]: time="2025-05-13T00:49:36.018536397Z" level=info msg="Start cni network conf syncer for default"
May 13 00:49:36.018711 env[1313]: time="2025-05-13T00:49:36.018549562Z" level=info msg="Start streaming server"
May 13 00:49:36.025479 locksmithd[1346]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 00:49:36.293127 systemd-networkd[1088]: eth0: Gained IPv6LL
May 13 00:49:36.295302 systemd[1]: Finished systemd-networkd-wait-online.service.
May 13 00:49:36.296746 systemd[1]: Reached target network-online.target.
May 13 00:49:36.299017 systemd[1]: Starting kubelet.service...
May 13 00:49:36.326636 tar[1310]: linux-amd64/LICENSE
May 13 00:49:36.326743 tar[1310]: linux-amd64/README.md
May 13 00:49:36.330846 systemd[1]: Finished prepare-helm.service.
May 13 00:49:36.487728 sshd_keygen[1317]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 00:49:36.506318 systemd[1]: Finished sshd-keygen.service.
May 13 00:49:36.508783 systemd[1]: Starting issuegen.service...
May 13 00:49:36.513931 systemd[1]: issuegen.service: Deactivated successfully.
May 13 00:49:36.514166 systemd[1]: Finished issuegen.service.
May 13 00:49:36.516188 systemd[1]: Starting systemd-user-sessions.service...
May 13 00:49:36.522649 systemd[1]: Finished systemd-user-sessions.service.
May 13 00:49:36.525016 systemd[1]: Started getty@tty1.service.
May 13 00:49:36.526768 systemd[1]: Started serial-getty@ttyS0.service.
May 13 00:49:36.527836 systemd[1]: Reached target getty.target.
May 13 00:49:36.874381 systemd[1]: Started kubelet.service.
May 13 00:49:36.876055 systemd[1]: Reached target multi-user.target.
May 13 00:49:36.878338 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 13 00:49:36.883451 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 13 00:49:36.883659 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 13 00:49:36.886608 systemd[1]: Startup finished in 4.907s (kernel) + 4.903s (userspace) = 9.811s.
May 13 00:49:37.316039 kubelet[1388]: E0513 00:49:37.315919    1388 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:49:37.317884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:49:37.318059 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:49:40.174631 systemd[1]: Created slice system-sshd.slice.
May 13 00:49:40.175551 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:46142.service.
May 13 00:49:40.207680 sshd[1399]: Accepted publickey for core from 10.0.0.1 port 46142 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8
May 13 00:49:40.208903 sshd[1399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:40.215730 systemd[1]: Created slice user-500.slice.
May 13 00:49:40.216491 systemd[1]: Starting user-runtime-dir@500.service...
May 13 00:49:40.217896 systemd-logind[1296]: New session 1 of user core.
May 13 00:49:40.224502 systemd[1]: Finished user-runtime-dir@500.service.
May 13 00:49:40.225618 systemd[1]: Starting user@500.service...
May 13 00:49:40.228107 (systemd)[1403]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:40.293834 systemd[1403]: Queued start job for default target default.target.
May 13 00:49:40.294011 systemd[1403]: Reached target paths.target.
May 13 00:49:40.294026 systemd[1403]: Reached target sockets.target.
May 13 00:49:40.294037 systemd[1403]: Reached target timers.target.
May 13 00:49:40.294047 systemd[1403]: Reached target basic.target.
May 13 00:49:40.294082 systemd[1403]: Reached target default.target.
May 13 00:49:40.294105 systemd[1403]: Startup finished in 61ms.
May 13 00:49:40.294224 systemd[1]: Started user@500.service.
May 13 00:49:40.295442 systemd[1]: Started session-1.scope.
May 13 00:49:40.345591 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:46156.service.
May 13 00:49:40.377374 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 46156 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8
May 13 00:49:40.378435 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:40.381819 systemd-logind[1296]: New session 2 of user core.
May 13 00:49:40.382505 systemd[1]: Started session-2.scope.
May 13 00:49:40.435423 sshd[1413]: pam_unix(sshd:session): session closed for user core
May 13 00:49:40.437488 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:46158.service.
May 13 00:49:40.437873 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:46156.service: Deactivated successfully.
May 13 00:49:40.438776 systemd[1]: session-2.scope: Deactivated successfully.
May 13 00:49:40.439215 systemd-logind[1296]: Session 2 logged out. Waiting for processes to exit.
May 13 00:49:40.439907 systemd-logind[1296]: Removed session 2.
May 13 00:49:40.469290 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 46158 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8
May 13 00:49:40.470232 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:40.473043 systemd-logind[1296]: New session 3 of user core.
May 13 00:49:40.473693 systemd[1]: Started session-3.scope.
May 13 00:49:40.522173 sshd[1418]: pam_unix(sshd:session): session closed for user core
May 13 00:49:40.523987 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:46170.service.
May 13 00:49:40.524752 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:46158.service: Deactivated successfully.
May 13 00:49:40.525669 systemd-logind[1296]: Session 3 logged out. Waiting for processes to exit.
May 13 00:49:40.525716 systemd[1]: session-3.scope: Deactivated successfully.
May 13 00:49:40.526531 systemd-logind[1296]: Removed session 3.
May 13 00:49:40.552751 sshd[1425]: Accepted publickey for core from 10.0.0.1 port 46170 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8
May 13 00:49:40.553528 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:40.556380 systemd-logind[1296]: New session 4 of user core.
May 13 00:49:40.556990 systemd[1]: Started session-4.scope.
May 13 00:49:40.610010 sshd[1425]: pam_unix(sshd:session): session closed for user core
May 13 00:49:40.612234 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:46182.service.
May 13 00:49:40.612634 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:46170.service: Deactivated successfully.
May 13 00:49:40.613747 systemd-logind[1296]: Session 4 logged out. Waiting for processes to exit.
May 13 00:49:40.613799 systemd[1]: session-4.scope: Deactivated successfully.
May 13 00:49:40.614776 systemd-logind[1296]: Removed session 4.
May 13 00:49:40.644716 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 46182 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8
May 13 00:49:40.645828 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:40.648909 systemd-logind[1296]: New session 5 of user core.
May 13 00:49:40.649567 systemd[1]: Started session-5.scope.
May 13 00:49:40.706326 sudo[1438]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 00:49:40.706506 sudo[1438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 13 00:49:40.715198 dbus-daemon[1285]: \xd0\u001d\x93@\xb0U: received setenforce notice (enforcing=-267429920)
May 13 00:49:40.717200 sudo[1438]: pam_unix(sudo:session): session closed for user root
May 13 00:49:40.718784 sshd[1432]: pam_unix(sshd:session): session closed for user core
May 13 00:49:40.721193 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:46192.service.
May 13 00:49:40.721584 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:46182.service: Deactivated successfully.
May 13 00:49:40.722642 systemd-logind[1296]: Session 5 logged out. Waiting for processes to exit.
May 13 00:49:40.722727 systemd[1]: session-5.scope: Deactivated successfully.
May 13 00:49:40.723662 systemd-logind[1296]: Removed session 5.
May 13 00:49:40.752296 sshd[1441]: Accepted publickey for core from 10.0.0.1 port 46192 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8
May 13 00:49:40.753271 sshd[1441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:40.756207 systemd-logind[1296]: New session 6 of user core.
May 13 00:49:40.756864 systemd[1]: Started session-6.scope.
May 13 00:49:40.812331 sudo[1447]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 00:49:40.812526 sudo[1447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 13 00:49:40.814687 sudo[1447]: pam_unix(sudo:session): session closed for user root
May 13 00:49:40.818799 sudo[1446]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 13 00:49:40.818984 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 13 00:49:40.826627 systemd[1]: Stopping audit-rules.service...
May 13 00:49:40.826000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 13 00:49:40.828732 kernel: kauditd_printk_skb: 176 callbacks suppressed
May 13 00:49:40.828761 kernel: audit: type=1305 audit(1747097380.826:145): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 13 00:49:40.828891 auditctl[1450]: No rules
May 13 00:49:40.829225 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 00:49:40.829451 systemd[1]: Stopped audit-rules.service.
May 13 00:49:40.826000 audit[1450]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff8e590c30 a2=420 a3=0 items=0 ppid=1 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:49:40.831042 systemd[1]: Starting audit-rules.service...
May 13 00:49:40.835065 kernel: audit: type=1300 audit(1747097380.826:145): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff8e590c30 a2=420 a3=0 items=0 ppid=1 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:49:40.835988 kernel: audit: type=1327 audit(1747097380.826:145): proctitle=2F7362696E2F617564697463746C002D44
May 13 00:49:40.826000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
May 13 00:49:40.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.840445 kernel: audit: type=1131 audit(1747097380.828:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.848178 augenrules[1468]: No rules
May 13 00:49:40.848811 systemd[1]: Finished audit-rules.service.
May 13 00:49:40.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.849577 sudo[1446]: pam_unix(sudo:session): session closed for user root
May 13 00:49:40.851462 sshd[1441]: pam_unix(sshd:session): session closed for user core
May 13 00:49:40.848000 audit[1446]: USER_END pid=1446 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.853768 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:46196.service.
May 13 00:49:40.856439 kernel: audit: type=1130 audit(1747097380.848:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.856466 kernel: audit: type=1106 audit(1747097380.848:148): pid=1446 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.856481 kernel: audit: type=1104 audit(1747097380.848:149): pid=1446 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.848000 audit[1446]: CRED_DISP pid=1446 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.854158 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:46192.service: Deactivated successfully.
May 13 00:49:40.855003 systemd[1]: session-6.scope: Deactivated successfully.
May 13 00:49:40.855630 systemd-logind[1296]: Session 6 logged out. Waiting for processes to exit.
May 13 00:49:40.856616 systemd-logind[1296]: Removed session 6.
May 13 00:49:40.850000 audit[1441]: USER_END pid=1441 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:49:40.864123 kernel: audit: type=1106 audit(1747097380.850:150): pid=1441 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:49:40.850000 audit[1441]: CRED_DISP pid=1441 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:49:40.867683 kernel: audit: type=1104 audit(1747097380.850:151): pid=1441 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:49:40.867714 kernel: audit: type=1130 audit(1747097380.851:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.140:22-10.0.0.1:46196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:40.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.140:22-10.0.0.1:46196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:40.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.140:22-10.0.0.1:46192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:40.884000 audit[1474]: USER_ACCT pid=1474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:49:40.885773 sshd[1474]: Accepted publickey for core from 10.0.0.1 port 46196 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:49:40.885000 audit[1474]: CRED_ACQ pid=1474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:49:40.885000 audit[1474]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe546f3d80 a2=3 a3=0 items=0 ppid=1 pid=1474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:40.885000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:49:40.886576 sshd[1474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:49:40.889414 systemd-logind[1296]: New session 7 of user core. May 13 00:49:40.890045 systemd[1]: Started session-7.scope. 
May 13 00:49:40.892000 audit[1474]: USER_START pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:49:40.893000 audit[1478]: CRED_ACQ pid=1478 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:49:40.938000 audit[1479]: USER_ACCT pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 00:49:40.941026 sudo[1479]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:49:40.939000 audit[1479]: CRED_REFR pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 00:49:40.941203 sudo[1479]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:49:40.941000 audit[1479]: USER_START pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 00:49:40.959473 systemd[1]: Starting docker.service... 
May 13 00:49:40.990180 env[1491]: time="2025-05-13T00:49:40.990112821Z" level=info msg="Starting up" May 13 00:49:40.991458 env[1491]: time="2025-05-13T00:49:40.991424794Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:49:40.991458 env[1491]: time="2025-05-13T00:49:40.991440407Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:49:40.991458 env[1491]: time="2025-05-13T00:49:40.991457680Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:49:40.991458 env[1491]: time="2025-05-13T00:49:40.991466977Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:49:40.992869 env[1491]: time="2025-05-13T00:49:40.992832246Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:49:40.992869 env[1491]: time="2025-05-13T00:49:40.992857965Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:49:40.992969 env[1491]: time="2025-05-13T00:49:40.992876017Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:49:40.992969 env[1491]: time="2025-05-13T00:49:40.992885386Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:49:41.582563 env[1491]: time="2025-05-13T00:49:41.582516724Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 13 00:49:41.582563 env[1491]: time="2025-05-13T00:49:41.582543782Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 13 00:49:41.582768 env[1491]: time="2025-05-13T00:49:41.582685580Z" level=info msg="Loading containers: start." 
May 13 00:49:41.630000 audit[1526]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.630000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffcc1a8e50 a2=0 a3=7fffcc1a8e3c items=0 ppid=1491 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.630000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 13 00:49:41.632000 audit[1528]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.632000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff04190f90 a2=0 a3=7fff04190f7c items=0 ppid=1491 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.632000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 13 00:49:41.633000 audit[1530]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.633000 audit[1530]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe7c841670 a2=0 a3=7ffe7c84165c items=0 ppid=1491 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.633000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 13 00:49:41.634000 audit[1532]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.634000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc5b6127a0 a2=0 a3=7ffc5b61278c items=0 ppid=1491 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.634000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 13 00:49:41.637000 audit[1534]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.637000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc389a3ba0 a2=0 a3=7ffc389a3b8c items=0 ppid=1491 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.637000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 13 00:49:41.656000 audit[1539]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.656000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd87a20380 a2=0 a3=7ffd87a2036c items=0 ppid=1491 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.656000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 13 00:49:41.804000 audit[1541]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.804000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe6023b060 a2=0 a3=7ffe6023b04c items=0 ppid=1491 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.804000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 13 00:49:41.805000 audit[1543]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.805000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffeb3505620 a2=0 a3=7ffeb350560c items=0 ppid=1491 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.805000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 13 00:49:41.807000 audit[1545]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.807000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe1d86dec0 a2=0 a3=7ffe1d86deac items=0 ppid=1491 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.807000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 13 00:49:41.814000 audit[1549]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.814000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdd3b5e310 a2=0 a3=7ffdd3b5e2fc items=0 ppid=1491 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.814000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 13 00:49:41.821000 audit[1550]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.821000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff8f864560 a2=0 a3=7fff8f86454c items=0 ppid=1491 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.821000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 13 00:49:41.830982 kernel: Initializing XFRM netlink socket May 13 00:49:41.857065 env[1491]: time="2025-05-13T00:49:41.856996255Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 13 00:49:41.870000 audit[1558]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.870000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe93a59190 a2=0 a3=7ffe93a5917c items=0 ppid=1491 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.870000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 13 00:49:41.880000 audit[1561]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.880000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffff0f7a9b0 a2=0 a3=7ffff0f7a99c items=0 ppid=1491 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.880000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 13 00:49:41.882000 audit[1564]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.882000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffea64c13a0 a2=0 a3=7ffea64c138c items=0 ppid=1491 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.882000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 13 00:49:41.883000 audit[1566]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.883000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff444d6d00 a2=0 a3=7fff444d6cec items=0 ppid=1491 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.883000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 13 00:49:41.885000 audit[1568]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.885000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffce4399710 a2=0 a3=7ffce43996fc items=0 ppid=1491 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.885000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 13 00:49:41.885000 audit[1570]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.885000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdf92590d0 a2=0 a3=7ffdf92590bc items=0 ppid=1491 
pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.885000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 13 00:49:41.886000 audit[1572]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.886000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffdd82463e0 a2=0 a3=7ffdd82463cc items=0 ppid=1491 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.886000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 13 00:49:41.893000 audit[1575]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.893000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffcb8ba5fe0 a2=0 a3=7ffcb8ba5fcc items=0 ppid=1491 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.893000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 13 00:49:41.895000 audit[1577]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.895000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe2a39e030 a2=0 a3=7ffe2a39e01c items=0 ppid=1491 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.895000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 13 00:49:41.896000 audit[1579]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.896000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe9071be60 a2=0 a3=7ffe9071be4c items=0 ppid=1491 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.896000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 13 00:49:41.897000 audit[1581]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.897000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef8874050 a2=0 a3=7ffef887403c items=0 ppid=1491 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.897000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 13 00:49:41.899037 systemd-networkd[1088]: docker0: Link UP May 13 00:49:41.906000 audit[1585]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.906000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcb095e580 a2=0 a3=7ffcb095e56c items=0 ppid=1491 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.906000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 13 00:49:41.911000 audit[1586]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:49:41.911000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdbc8bf740 a2=0 a3=7ffdbc8bf72c items=0 ppid=1491 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:41.911000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 13 00:49:41.913022 env[1491]: time="2025-05-13T00:49:41.912989999Z" level=info msg="Loading containers: done." 
May 13 00:49:41.925627 env[1491]: time="2025-05-13T00:49:41.925583176Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:49:41.925762 env[1491]: time="2025-05-13T00:49:41.925742577Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:49:41.925839 env[1491]: time="2025-05-13T00:49:41.925818115Z" level=info msg="Daemon has completed initialization" May 13 00:49:41.940762 systemd[1]: Started docker.service. May 13 00:49:41.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:41.947492 env[1491]: time="2025-05-13T00:49:41.947444704Z" level=info msg="API listen on /run/docker.sock" May 13 00:49:42.652223 env[1313]: time="2025-05-13T00:49:42.652171530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:49:43.271746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760426439.mount: Deactivated successfully. 
May 13 00:49:44.827646 env[1313]: time="2025-05-13T00:49:44.827581661Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:44.829541 env[1313]: time="2025-05-13T00:49:44.829499559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:44.831087 env[1313]: time="2025-05-13T00:49:44.831048256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:44.834437 env[1313]: time="2025-05-13T00:49:44.834400070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:44.835154 env[1313]: time="2025-05-13T00:49:44.835121431Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 00:49:44.843416 env[1313]: time="2025-05-13T00:49:44.843384247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:49:46.992638 env[1313]: time="2025-05-13T00:49:46.992574511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:46.994467 env[1313]: time="2025-05-13T00:49:46.994403833Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 13 00:49:46.996114 env[1313]: time="2025-05-13T00:49:46.996051083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:46.997599 env[1313]: time="2025-05-13T00:49:46.997567443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:46.998244 env[1313]: time="2025-05-13T00:49:46.998205331Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 00:49:47.006448 env[1313]: time="2025-05-13T00:49:47.006410561Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:49:47.416075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:49:47.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:47.416248 systemd[1]: Stopped kubelet.service. May 13 00:49:47.417639 kernel: kauditd_printk_skb: 84 callbacks suppressed May 13 00:49:47.417738 kernel: audit: type=1130 audit(1747097387.415:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:47.417519 systemd[1]: Starting kubelet.service... May 13 00:49:47.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:49:47.424668 kernel: audit: type=1131 audit(1747097387.415:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:47.492919 systemd[1]: Started kubelet.service. May 13 00:49:47.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:47.497971 kernel: audit: type=1130 audit(1747097387.492:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:47.542959 kubelet[1649]: E0513 00:49:47.542915 1649 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:49:47.546104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:49:47.546236 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:49:47.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 13 00:49:47.551971 kernel: audit: type=1131 audit(1747097387.545:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' May 13 00:49:48.993489 env[1313]: time="2025-05-13T00:49:48.993425892Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:48.995323 env[1313]: time="2025-05-13T00:49:48.995274714Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:48.998763 env[1313]: time="2025-05-13T00:49:48.998738224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.000518 env[1313]: time="2025-05-13T00:49:49.000483618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.001200 env[1313]: time="2025-05-13T00:49:49.001157131Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 00:49:49.009238 env[1313]: time="2025-05-13T00:49:49.009207708Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:49:50.458854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226377669.mount: Deactivated successfully. 
May 13 00:49:52.201750 env[1313]: time="2025-05-13T00:49:52.201683674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:52.268352 env[1313]: time="2025-05-13T00:49:52.268300941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:52.308870 env[1313]: time="2025-05-13T00:49:52.308828473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:52.348990 env[1313]: time="2025-05-13T00:49:52.348960228Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:52.349387 env[1313]: time="2025-05-13T00:49:52.349366564Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:49:52.357746 env[1313]: time="2025-05-13T00:49:52.357705578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:49:52.975198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543906910.mount: Deactivated successfully. 
May 13 00:49:54.016178 env[1313]: time="2025-05-13T00:49:54.016118337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:54.018052 env[1313]: time="2025-05-13T00:49:54.017997329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:54.019750 env[1313]: time="2025-05-13T00:49:54.019706065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:54.021285 env[1313]: time="2025-05-13T00:49:54.021254240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:54.021968 env[1313]: time="2025-05-13T00:49:54.021920930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:49:54.030080 env[1313]: time="2025-05-13T00:49:54.030062210Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:49:54.529253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775994384.mount: Deactivated successfully. 
May 13 00:49:54.534051 env[1313]: time="2025-05-13T00:49:54.534018271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:54.535722 env[1313]: time="2025-05-13T00:49:54.535682547Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:54.537009 env[1313]: time="2025-05-13T00:49:54.536981750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:54.538337 env[1313]: time="2025-05-13T00:49:54.538314085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:54.538748 env[1313]: time="2025-05-13T00:49:54.538713865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 00:49:54.546938 env[1313]: time="2025-05-13T00:49:54.546910435Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:49:55.996297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569234646.mount: Deactivated successfully. May 13 00:49:57.665866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:49:57.666062 systemd[1]: Stopped kubelet.service. May 13 00:49:57.674058 kernel: audit: type=1130 audit(1747097397.665:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:57.674174 kernel: audit: type=1131 audit(1747097397.665:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:57.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:57.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:57.667324 systemd[1]: Starting kubelet.service... May 13 00:49:57.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:57.740507 systemd[1]: Started kubelet.service. May 13 00:49:57.744984 kernel: audit: type=1130 audit(1747097397.739:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:57.779864 kubelet[1692]: E0513 00:49:57.779816 1692 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:49:57.781670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:49:57.781778 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 00:49:57.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 13 00:49:57.785975 kernel: audit: type=1131 audit(1747097397.781:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 13 00:49:59.610450 env[1313]: time="2025-05-13T00:49:59.610397055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:59.612206 env[1313]: time="2025-05-13T00:49:59.612163443Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:59.613840 env[1313]: time="2025-05-13T00:49:59.613784283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:59.615530 env[1313]: time="2025-05-13T00:49:59.615483040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:59.616156 env[1313]: time="2025-05-13T00:49:59.616126904Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 00:50:01.725122 systemd[1]: Stopped kubelet.service. 
May 13 00:50:01.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:01.727305 systemd[1]: Starting kubelet.service... May 13 00:50:01.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:01.732322 kernel: audit: type=1130 audit(1747097401.724:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:01.732428 kernel: audit: type=1131 audit(1747097401.724:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:01.742378 systemd[1]: Reloading. May 13 00:50:01.799009 /usr/lib/systemd/system-generators/torcx-generator[1802]: time="2025-05-13T00:50:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:50:01.799349 /usr/lib/systemd/system-generators/torcx-generator[1802]: time="2025-05-13T00:50:01Z" level=info msg="torcx already run" May 13 00:50:02.059365 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:50:02.059380 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 13 00:50:02.076011 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:50:02.146647 systemd[1]: Started kubelet.service. May 13 00:50:02.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:02.148124 systemd[1]: Stopping kubelet.service... May 13 00:50:02.148377 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:50:02.148591 systemd[1]: Stopped kubelet.service. May 13 00:50:02.149808 systemd[1]: Starting kubelet.service... May 13 00:50:02.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:02.154917 kernel: audit: type=1130 audit(1747097402.145:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:02.154979 kernel: audit: type=1131 audit(1747097402.147:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:02.219827 systemd[1]: Started kubelet.service. May 13 00:50:02.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:50:02.225961 kernel: audit: type=1130 audit(1747097402.220:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:02.252376 kubelet[1864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:50:02.252376 kubelet[1864]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:50:02.252376 kubelet[1864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 00:50:02.252734 kubelet[1864]: I0513 00:50:02.252404 1864 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:50:02.550469 kubelet[1864]: I0513 00:50:02.550425 1864 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:50:02.550469 kubelet[1864]: I0513 00:50:02.550456 1864 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:50:02.550700 kubelet[1864]: I0513 00:50:02.550677 1864 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:50:02.563660 kubelet[1864]: I0513 00:50:02.563620 1864 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:50:02.564020 kubelet[1864]: E0513 00:50:02.563994 1864 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:02.572084 kubelet[1864]: I0513 00:50:02.572047 1864 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:50:02.573139 kubelet[1864]: I0513 00:50:02.573107 1864 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:50:02.573296 kubelet[1864]: I0513 00:50:02.573134 1864 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:50:02.573674 kubelet[1864]: I0513 00:50:02.573656 1864 topology_manager.go:138] "Creating topology manager with none policy" May 13 
00:50:02.573674 kubelet[1864]: I0513 00:50:02.573670 1864 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:50:02.573774 kubelet[1864]: I0513 00:50:02.573757 1864 state_mem.go:36] "Initialized new in-memory state store" May 13 00:50:02.574341 kubelet[1864]: I0513 00:50:02.574323 1864 kubelet.go:400] "Attempting to sync node with API server" May 13 00:50:02.574341 kubelet[1864]: I0513 00:50:02.574339 1864 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:50:02.574396 kubelet[1864]: I0513 00:50:02.574357 1864 kubelet.go:312] "Adding apiserver pod source" May 13 00:50:02.574396 kubelet[1864]: I0513 00:50:02.574370 1864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:50:02.576525 kubelet[1864]: W0513 00:50:02.576481 1864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:02.576565 kubelet[1864]: E0513 00:50:02.576544 1864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:02.585556 kubelet[1864]: W0513 00:50:02.585512 1864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:02.585556 kubelet[1864]: E0513 00:50:02.585550 1864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: 
connection refused May 13 00:50:02.585556 kubelet[1864]: I0513 00:50:02.585553 1864 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:50:02.588356 kubelet[1864]: I0513 00:50:02.588328 1864 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:50:02.588438 kubelet[1864]: W0513 00:50:02.588378 1864 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:50:02.588892 kubelet[1864]: I0513 00:50:02.588871 1864 server.go:1264] "Started kubelet" May 13 00:50:02.589510 kubelet[1864]: I0513 00:50:02.589106 1864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:50:02.589510 kubelet[1864]: I0513 00:50:02.589367 1864 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:50:02.589510 kubelet[1864]: I0513 00:50:02.589392 1864 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:50:02.589000 audit[1864]: AVC avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:02.591041 kubelet[1864]: I0513 00:50:02.590075 1864 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 13 00:50:02.591041 kubelet[1864]: I0513 00:50:02.590102 1864 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 13 00:50:02.591041 kubelet[1864]: I0513 00:50:02.590150 1864 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:50:02.591041 kubelet[1864]: I0513 00:50:02.590154 1864 server.go:455] "Adding debug handlers to kubelet server" May 13 00:50:02.589000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:02.589000 audit[1864]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ce70e0 a1=c000a6a978 a2=c000ce70b0 a3=25 items=0 ppid=1 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.589000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:02.589000 audit[1864]: AVC avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:02.589000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:02.589000 audit[1864]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a88740 a1=c000a6a990 a2=c000ce7170 a3=25 items=0 ppid=1 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.589000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:02.591000 audit[1876]: NETFILTER_CFG table=mangle:26 family=2 
entries=2 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:02.591000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd23c120c0 a2=0 a3=7ffd23c120ac items=0 ppid=1864 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 13 00:50:02.592000 audit[1877]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:02.592000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff944f8240 a2=0 a3=7fff944f822c items=0 ppid=1864 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 13 00:50:02.593959 kernel: audit: type=1400 audit(1747097402.589:200): avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:02.594602 kubelet[1864]: I0513 00:50:02.594591 1864 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:50:02.594830 kubelet[1864]: I0513 00:50:02.594816 1864 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:50:02.594930 kubelet[1864]: I0513 00:50:02.594919 1864 reconciler.go:26] "Reconciler: start to sync state" May 13 00:50:02.595264 kubelet[1864]: W0513 00:50:02.595237 1864 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:02.595332 kubelet[1864]: E0513 00:50:02.595270 1864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:02.599736 kubelet[1864]: E0513 00:50:02.599694 1864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" May 13 00:50:02.599824 kubelet[1864]: E0513 00:50:02.599797 1864 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:50:02.599000 audit[1879]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:02.599000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffece249780 a2=0 a3=7ffece24976c items=0 ppid=1864 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 13 00:50:02.600834 kubelet[1864]: E0513 00:50:02.600486 1864 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 
10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eefd4c550e380 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:50:02.588849024 +0000 UTC m=+0.366063674,LastTimestamp:2025-05-13 00:50:02.588849024 +0000 UTC m=+0.366063674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:50:02.601355 kubelet[1864]: I0513 00:50:02.601341 1864 factory.go:221] Registration of the containerd container factory successfully May 13 00:50:02.601355 kubelet[1864]: I0513 00:50:02.601353 1864 factory.go:221] Registration of the systemd container factory successfully May 13 00:50:02.601426 kubelet[1864]: I0513 00:50:02.601416 1864 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:50:02.605000 audit[1883]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:02.605000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb64c7c00 a2=0 a3=7fffb64c7bec items=0 ppid=1864 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.605000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 13 00:50:02.610000 audit[1886]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule 
pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:02.610000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffaf240340 a2=0 a3=7fffaf24032c items=0 ppid=1864 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 13 00:50:02.612156 kubelet[1864]: I0513 00:50:02.612118 1864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:50:02.611000 audit[1887]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:02.611000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe98a83f90 a2=0 a3=7ffe98a83f7c items=0 ppid=1864 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.611000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 13 00:50:02.613331 kubelet[1864]: I0513 00:50:02.613307 1864 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:50:02.613376 kubelet[1864]: I0513 00:50:02.613341 1864 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:50:02.613376 kubelet[1864]: I0513 00:50:02.613359 1864 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:50:02.613445 kubelet[1864]: E0513 00:50:02.613404 1864 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:50:02.612000 audit[1889]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:02.612000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc81ccba0 a2=0 a3=7fffc81ccb8c items=0 ppid=1864 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 13 00:50:02.613000 audit[1890]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:02.613000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe71419cb0 a2=0 a3=7ffe71419c9c items=0 ppid=1864 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 13 00:50:02.613000 audit[1891]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1891 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:02.613000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc87948840 a2=0 a3=7ffc8794882c items=0 ppid=1864 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.614648 kubelet[1864]: W0513 00:50:02.614575 1864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:02.614648 kubelet[1864]: E0513 00:50:02.614616 1864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:02.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 13 00:50:02.614000 audit[1893]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:02.614000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8cb5db20 a2=0 a3=7ffe8cb5db0c items=0 ppid=1864 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.614000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 13 00:50:02.615000 audit[1892]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain 
pid=1892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:02.615000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc089ef2b0 a2=0 a3=7ffc089ef29c items=0 ppid=1864 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 13 00:50:02.616000 audit[1896]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:02.616000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe34f0e0c0 a2=0 a3=7ffe34f0e0ac items=0 ppid=1864 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:02.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 13 00:50:02.618407 kubelet[1864]: I0513 00:50:02.618386 1864 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:50:02.618407 kubelet[1864]: I0513 00:50:02.618400 1864 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:50:02.618517 kubelet[1864]: I0513 00:50:02.618433 1864 state_mem.go:36] "Initialized new in-memory state store" May 13 00:50:02.696327 kubelet[1864]: I0513 00:50:02.696306 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:50:02.696592 kubelet[1864]: E0513 00:50:02.696564 1864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection 
refused" node="localhost" May 13 00:50:02.713849 kubelet[1864]: E0513 00:50:02.713808 1864 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:50:02.800633 kubelet[1864]: E0513 00:50:02.800553 1864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" May 13 00:50:02.897794 kubelet[1864]: I0513 00:50:02.897756 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:50:02.898179 kubelet[1864]: E0513 00:50:02.898134 1864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" May 13 00:50:02.914215 kubelet[1864]: E0513 00:50:02.914169 1864 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:50:03.050496 kubelet[1864]: I0513 00:50:03.050465 1864 policy_none.go:49] "None policy: Start" May 13 00:50:03.051375 kubelet[1864]: I0513 00:50:03.051303 1864 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:50:03.051375 kubelet[1864]: I0513 00:50:03.051342 1864 state_mem.go:35] "Initializing new in-memory state store" May 13 00:50:03.056190 kubelet[1864]: I0513 00:50:03.056165 1864 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:50:03.059029 kernel: kauditd_printk_skb: 43 callbacks suppressed May 13 00:50:03.059060 kernel: audit: type=1400 audit(1747097403.055:214): avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:03.055000 
audit[1864]: AVC avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:03.059123 kubelet[1864]: I0513 00:50:03.056220 1864 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 13 00:50:03.059123 kubelet[1864]: I0513 00:50:03.056305 1864 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:50:03.059123 kubelet[1864]: I0513 00:50:03.056398 1864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:50:03.060031 kubelet[1864]: E0513 00:50:03.060010 1864 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:50:03.055000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:03.061976 kernel: audit: type=1401 audit(1747097403.055:214): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:03.062084 kernel: audit: type=1300 audit(1747097403.055:214): arch=c000003e syscall=188 success=no exit=-22 a0=c000f55ce0 a1=c000d11320 a2=c000f55cb0 a3=25 items=0 ppid=1 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:03.055000 audit[1864]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f55ce0 a1=c000d11320 a2=c000f55cb0 a3=25 items=0 ppid=1 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:03.066330 kernel: audit: type=1327 
audit(1747097403.055:214): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:03.055000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:03.201622 kubelet[1864]: E0513 00:50:03.201564 1864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" May 13 00:50:03.299733 kubelet[1864]: I0513 00:50:03.299708 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:50:03.300084 kubelet[1864]: E0513 00:50:03.300051 1864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" May 13 00:50:03.315302 kubelet[1864]: I0513 00:50:03.315194 1864 topology_manager.go:215] "Topology Admit Handler" podUID="8723c4fa09709fe179dbc822b86729d2" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:50:03.316302 kubelet[1864]: I0513 00:50:03.316273 1864 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:50:03.317051 kubelet[1864]: I0513 00:50:03.317030 1864 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 
00:50:03.400257 kubelet[1864]: I0513 00:50:03.400225 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8723c4fa09709fe179dbc822b86729d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8723c4fa09709fe179dbc822b86729d2\") " pod="kube-system/kube-apiserver-localhost" May 13 00:50:03.400318 kubelet[1864]: I0513 00:50:03.400257 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8723c4fa09709fe179dbc822b86729d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8723c4fa09709fe179dbc822b86729d2\") " pod="kube-system/kube-apiserver-localhost" May 13 00:50:03.400318 kubelet[1864]: I0513 00:50:03.400280 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:03.400318 kubelet[1864]: I0513 00:50:03.400299 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8723c4fa09709fe179dbc822b86729d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8723c4fa09709fe179dbc822b86729d2\") " pod="kube-system/kube-apiserver-localhost" May 13 00:50:03.400401 kubelet[1864]: I0513 00:50:03.400356 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 
13 00:50:03.400634 kubelet[1864]: I0513 00:50:03.400596 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:03.400634 kubelet[1864]: I0513 00:50:03.400615 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:03.400634 kubelet[1864]: I0513 00:50:03.400629 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:03.400811 kubelet[1864]: I0513 00:50:03.400644 1864 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:50:03.620093 kubelet[1864]: E0513 00:50:03.620020 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:03.620145 kubelet[1864]: E0513 00:50:03.620101 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:03.620746 kubelet[1864]: E0513 00:50:03.620731 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:03.620888 env[1313]: time="2025-05-13T00:50:03.620850863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:50:03.621140 env[1313]: time="2025-05-13T00:50:03.621003864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8723c4fa09709fe179dbc822b86729d2,Namespace:kube-system,Attempt:0,}" May 13 00:50:03.621327 env[1313]: time="2025-05-13T00:50:03.621302855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:50:03.627697 kubelet[1864]: W0513 00:50:03.627660 1864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:03.627697 kubelet[1864]: E0513 00:50:03.627690 1864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:03.705425 kubelet[1864]: W0513 00:50:03.705378 1864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:03.705425 kubelet[1864]: 
E0513 00:50:03.705419 1864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:03.946625 kubelet[1864]: W0513 00:50:03.946567 1864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:03.946706 kubelet[1864]: E0513 00:50:03.946627 1864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:04.002025 kubelet[1864]: E0513 00:50:04.001992 1864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="1.6s" May 13 00:50:04.038604 kubelet[1864]: W0513 00:50:04.038537 1864 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:04.038604 kubelet[1864]: E0513 00:50:04.038601 1864 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:04.101291 kubelet[1864]: I0513 00:50:04.101263 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" 
May 13 00:50:04.101522 kubelet[1864]: E0513 00:50:04.101489 1864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" May 13 00:50:04.398138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231007694.mount: Deactivated successfully. May 13 00:50:04.402256 env[1313]: time="2025-05-13T00:50:04.402215384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.404763 env[1313]: time="2025-05-13T00:50:04.404718645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.405605 env[1313]: time="2025-05-13T00:50:04.405579036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.407248 env[1313]: time="2025-05-13T00:50:04.407214697Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.408680 env[1313]: time="2025-05-13T00:50:04.408647468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.409696 env[1313]: time="2025-05-13T00:50:04.409672480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.410788 env[1313]: time="2025-05-13T00:50:04.410752478Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.411821 env[1313]: time="2025-05-13T00:50:04.411791441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.413607 env[1313]: time="2025-05-13T00:50:04.413581547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.414663 env[1313]: time="2025-05-13T00:50:04.414633917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.415804 env[1313]: time="2025-05-13T00:50:04.415776521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.416340 env[1313]: time="2025-05-13T00:50:04.416320632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:04.432503 env[1313]: time="2025-05-13T00:50:04.432453604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:04.432503 env[1313]: time="2025-05-13T00:50:04.432491752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:04.432600 env[1313]: time="2025-05-13T00:50:04.432501369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:04.432682 env[1313]: time="2025-05-13T00:50:04.432638905Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b885b736303513eede5777f5845263a5b05c9b29d0a262aeb56411e0d5c2f9d pid=1905 runtime=io.containerd.runc.v2 May 13 00:50:04.441048 env[1313]: time="2025-05-13T00:50:04.440977793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:04.441048 env[1313]: time="2025-05-13T00:50:04.441028947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:04.441250 env[1313]: time="2025-05-13T00:50:04.441216895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:04.441523 env[1313]: time="2025-05-13T00:50:04.441487126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3ced62e4f80b2182048c9a173f0c51e517d221356cb9166d59ca7b274c86430 pid=1926 runtime=io.containerd.runc.v2 May 13 00:50:04.443010 env[1313]: time="2025-05-13T00:50:04.442965554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:04.443117 env[1313]: time="2025-05-13T00:50:04.443095029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:04.443221 env[1313]: time="2025-05-13T00:50:04.443199933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:04.443466 env[1313]: time="2025-05-13T00:50:04.443443669Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c52e57cad3a7a25bab5371d11cf9c97b35a2b31fc77809ea03c2f52106c4a231 pid=1933 runtime=io.containerd.runc.v2 May 13 00:50:04.486925 env[1313]: time="2025-05-13T00:50:04.486889813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b885b736303513eede5777f5845263a5b05c9b29d0a262aeb56411e0d5c2f9d\"" May 13 00:50:04.487914 kubelet[1864]: E0513 00:50:04.487732 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:04.490105 env[1313]: time="2025-05-13T00:50:04.490083869Z" level=info msg="CreateContainer within sandbox \"8b885b736303513eede5777f5845263a5b05c9b29d0a262aeb56411e0d5c2f9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:50:04.490376 env[1313]: time="2025-05-13T00:50:04.490191471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3ced62e4f80b2182048c9a173f0c51e517d221356cb9166d59ca7b274c86430\"" May 13 00:50:04.491049 kubelet[1864]: E0513 00:50:04.490868 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:04.497279 env[1313]: time="2025-05-13T00:50:04.497235977Z" level=info msg="CreateContainer within sandbox \"a3ced62e4f80b2182048c9a173f0c51e517d221356cb9166d59ca7b274c86430\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 
00:50:04.497397 env[1313]: time="2025-05-13T00:50:04.497263244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8723c4fa09709fe179dbc822b86729d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c52e57cad3a7a25bab5371d11cf9c97b35a2b31fc77809ea03c2f52106c4a231\"" May 13 00:50:04.497835 kubelet[1864]: E0513 00:50:04.497820 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:04.499722 env[1313]: time="2025-05-13T00:50:04.499690522Z" level=info msg="CreateContainer within sandbox \"c52e57cad3a7a25bab5371d11cf9c97b35a2b31fc77809ea03c2f52106c4a231\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:50:04.518820 env[1313]: time="2025-05-13T00:50:04.518791794Z" level=info msg="CreateContainer within sandbox \"8b885b736303513eede5777f5845263a5b05c9b29d0a262aeb56411e0d5c2f9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"13fc2c189877a68d5bf354c50d5031998dc793c719d4cfc572a6cfa5d0e3a557\"" May 13 00:50:04.519393 env[1313]: time="2025-05-13T00:50:04.519367162Z" level=info msg="StartContainer for \"13fc2c189877a68d5bf354c50d5031998dc793c719d4cfc572a6cfa5d0e3a557\"" May 13 00:50:04.521806 env[1313]: time="2025-05-13T00:50:04.521778194Z" level=info msg="CreateContainer within sandbox \"a3ced62e4f80b2182048c9a173f0c51e517d221356cb9166d59ca7b274c86430\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2ab714161345dcb6923b43765d42ffb0dd3b07ef0ec91705f4a2c6554f73cf1e\"" May 13 00:50:04.522108 env[1313]: time="2025-05-13T00:50:04.522088167Z" level=info msg="StartContainer for \"2ab714161345dcb6923b43765d42ffb0dd3b07ef0ec91705f4a2c6554f73cf1e\"" May 13 00:50:04.526251 env[1313]: time="2025-05-13T00:50:04.526220822Z" level=info msg="CreateContainer within sandbox 
\"c52e57cad3a7a25bab5371d11cf9c97b35a2b31fc77809ea03c2f52106c4a231\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99ae06fa96a6e89aae11a68e3761ee5bbdbb6c99a2ed85cf05bd54cac815adb4\"" May 13 00:50:04.526536 env[1313]: time="2025-05-13T00:50:04.526515030Z" level=info msg="StartContainer for \"99ae06fa96a6e89aae11a68e3761ee5bbdbb6c99a2ed85cf05bd54cac815adb4\"" May 13 00:50:04.565330 kubelet[1864]: E0513 00:50:04.565307 1864 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.140:6443: connect: connection refused May 13 00:50:04.578744 env[1313]: time="2025-05-13T00:50:04.578708570Z" level=info msg="StartContainer for \"13fc2c189877a68d5bf354c50d5031998dc793c719d4cfc572a6cfa5d0e3a557\" returns successfully" May 13 00:50:04.579002 env[1313]: time="2025-05-13T00:50:04.578895807Z" level=info msg="StartContainer for \"2ab714161345dcb6923b43765d42ffb0dd3b07ef0ec91705f4a2c6554f73cf1e\" returns successfully" May 13 00:50:04.584322 env[1313]: time="2025-05-13T00:50:04.584284746Z" level=info msg="StartContainer for \"99ae06fa96a6e89aae11a68e3761ee5bbdbb6c99a2ed85cf05bd54cac815adb4\" returns successfully" May 13 00:50:04.619091 kubelet[1864]: E0513 00:50:04.618976 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:04.620839 kubelet[1864]: E0513 00:50:04.620785 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:04.622528 kubelet[1864]: E0513 00:50:04.622477 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:05.624057 kubelet[1864]: E0513 00:50:05.623994 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:05.702842 kubelet[1864]: I0513 00:50:05.702811 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:50:05.708509 kubelet[1864]: E0513 00:50:05.708474 1864 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:50:05.811349 kubelet[1864]: I0513 00:50:05.811312 1864 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:50:05.824815 kubelet[1864]: E0513 00:50:05.824782 1864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:50:05.924899 kubelet[1864]: E0513 00:50:05.924859 1864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:50:06.025391 kubelet[1864]: E0513 00:50:06.025355 1864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:50:06.126310 kubelet[1864]: E0513 00:50:06.126272 1864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:50:06.227047 kubelet[1864]: E0513 00:50:06.226917 1864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:50:06.327461 kubelet[1864]: E0513 00:50:06.327426 1864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:50:06.428088 kubelet[1864]: E0513 00:50:06.428057 1864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"localhost\" not found" May 13 00:50:06.528683 kubelet[1864]: E0513 00:50:06.528564 1864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:50:07.577356 kubelet[1864]: I0513 00:50:07.577311 1864 apiserver.go:52] "Watching apiserver" May 13 00:50:07.595048 kubelet[1864]: I0513 00:50:07.594998 1864 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:50:07.653375 systemd[1]: Reloading. May 13 00:50:07.711321 /usr/lib/systemd/system-generators/torcx-generator[2158]: time="2025-05-13T00:50:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:50:07.711347 /usr/lib/systemd/system-generators/torcx-generator[2158]: time="2025-05-13T00:50:07Z" level=info msg="torcx already run" May 13 00:50:07.784062 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:50:07.784079 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:50:07.800536 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:50:07.876087 systemd[1]: Stopping kubelet.service... May 13 00:50:07.895293 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:50:07.895592 systemd[1]: Stopped kubelet.service. 
May 13 00:50:07.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:07.897181 systemd[1]: Starting kubelet.service... May 13 00:50:07.898979 kernel: audit: type=1131 audit(1747097407.894:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:07.980378 systemd[1]: Started kubelet.service. May 13 00:50:07.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:07.984965 kernel: audit: type=1130 audit(1747097407.979:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:08.016040 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:50:08.016403 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:50:08.016403 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 00:50:08.016528 kubelet[2213]: I0513 00:50:08.016439 2213 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:50:08.020542 kubelet[2213]: I0513 00:50:08.020508 2213 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:50:08.020542 kubelet[2213]: I0513 00:50:08.020537 2213 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:50:08.020788 kubelet[2213]: I0513 00:50:08.020766 2213 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:50:08.021969 kubelet[2213]: I0513 00:50:08.021933 2213 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:50:08.023117 kubelet[2213]: I0513 00:50:08.022894 2213 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:50:08.028770 kubelet[2213]: I0513 00:50:08.028753 2213 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:50:08.029374 kubelet[2213]: I0513 00:50:08.029344 2213 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:50:08.029889 kubelet[2213]: I0513 00:50:08.029442 2213 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:50:08.030016 kubelet[2213]: I0513 00:50:08.029900 2213 topology_manager.go:138] "Creating topology manager with none policy" May 13 
00:50:08.030016 kubelet[2213]: I0513 00:50:08.029910 2213 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:50:08.030016 kubelet[2213]: I0513 00:50:08.029959 2213 state_mem.go:36] "Initialized new in-memory state store" May 13 00:50:08.030089 kubelet[2213]: I0513 00:50:08.030039 2213 kubelet.go:400] "Attempting to sync node with API server" May 13 00:50:08.030089 kubelet[2213]: I0513 00:50:08.030050 2213 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:50:08.030089 kubelet[2213]: I0513 00:50:08.030066 2213 kubelet.go:312] "Adding apiserver pod source" May 13 00:50:08.030089 kubelet[2213]: I0513 00:50:08.030077 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:50:08.030490 kubelet[2213]: I0513 00:50:08.030475 2213 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:50:08.030723 kubelet[2213]: I0513 00:50:08.030708 2213 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:50:08.031149 kubelet[2213]: I0513 00:50:08.031138 2213 server.go:1264] "Started kubelet" May 13 00:50:08.042373 kernel: audit: type=1400 audit(1747097408.031:217): avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:08.042431 kernel: audit: type=1401 audit(1747097408.031:217): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:08.042447 kernel: audit: type=1300 audit(1747097408.031:217): arch=c000003e syscall=188 success=no exit=-22 a0=c00039cdb0 a1=c000a1a7c8 a2=c00039cc90 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:08.031000 
audit[2213]: AVC avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:08.031000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:08.031000 audit[2213]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00039cdb0 a1=c000a1a7c8 a2=c00039cc90 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.032617 2213 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.032652 2213 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.032669 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:50:08.042597 kubelet[2213]: E0513 00:50:08.039827 2213 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.039931 2213 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.039979 2213 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.040085 2213 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.040175 2213 reconciler.go:26] "Reconciler: start to sync state" May 13 00:50:08.042597 kubelet[2213]: E0513 00:50:08.040385 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.040580 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.040746 2213 server.go:455] "Adding debug handlers to kubelet server" May 13 00:50:08.042597 kubelet[2213]: I0513 00:50:08.042166 2213 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:50:08.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:08.051191 kubelet[2213]: I0513 00:50:08.049449 2213 factory.go:221] Registration of the containerd container factory successfully May 13 00:50:08.051191 kubelet[2213]: I0513 00:50:08.049463 2213 factory.go:221] Registration of the systemd container factory successfully May 13 00:50:08.051191 kubelet[2213]: I0513 00:50:08.049577 2213 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:50:08.031000 audit[2213]: AVC avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:08.031000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:08.031000 audit[2213]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009166c0 a1=c000a1a7e0 a2=c00039d5f0 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:08.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:08.051966 kernel: audit: type=1327 audit(1747097408.031:217): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:08.054171 kubelet[2213]: I0513 00:50:08.054144 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:50:08.054854 kubelet[2213]: I0513 00:50:08.054836 2213 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:50:08.054907 kubelet[2213]: I0513 00:50:08.054858 2213 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:50:08.054907 kubelet[2213]: I0513 00:50:08.054872 2213 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:50:08.054907 kubelet[2213]: E0513 00:50:08.054903 2213 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:50:08.092452 kubelet[2213]: I0513 00:50:08.092421 2213 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:50:08.092452 kubelet[2213]: I0513 00:50:08.092439 2213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:50:08.092452 kubelet[2213]: I0513 00:50:08.092455 2213 state_mem.go:36] "Initialized new in-memory state store" May 13 00:50:08.092648 kubelet[2213]: I0513 00:50:08.092575 2213 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:50:08.092648 kubelet[2213]: I0513 00:50:08.092585 2213 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:50:08.092648 kubelet[2213]: I0513 00:50:08.092602 2213 policy_none.go:49] "None policy: Start" May 13 00:50:08.093191 kubelet[2213]: I0513 00:50:08.093171 2213 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:50:08.093237 kubelet[2213]: I0513 00:50:08.093197 2213 state_mem.go:35] "Initializing new in-memory state store" May 13 00:50:08.093379 kubelet[2213]: I0513 00:50:08.093357 2213 state_mem.go:75] "Updated machine memory state" May 13 00:50:08.094427 kubelet[2213]: I0513 00:50:08.094395 2213 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:50:08.093000 audit[2213]: AVC avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 
00:50:08.094625 kubelet[2213]: I0513 00:50:08.094453 2213 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 13 00:50:08.094625 kubelet[2213]: I0513 00:50:08.094594 2213 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:50:08.094727 kubelet[2213]: I0513 00:50:08.094709 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:50:08.095245 kernel: kauditd_printk_skb: 4 callbacks suppressed May 13 00:50:08.095341 kernel: audit: type=1400 audit(1747097408.093:219): avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:08.093000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:08.108281 kernel: audit: type=1401 audit(1747097408.093:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 00:50:08.108339 kernel: audit: type=1300 audit(1747097408.093:219): arch=c000003e syscall=188 success=no exit=-22 a0=c0011f0240 a1=c00117fa70 a2=c0011f0210 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:08.108355 kernel: audit: type=1327 audit(1747097408.093:219): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:08.093000 audit[2213]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011f0240 a1=c00117fa70 a2=c0011f0210 
a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:08.093000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 00:50:08.143467 kubelet[2213]: I0513 00:50:08.143388 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:50:08.148062 kubelet[2213]: I0513 00:50:08.148038 2213 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:50:08.148132 kubelet[2213]: I0513 00:50:08.148085 2213 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:50:08.155017 kubelet[2213]: I0513 00:50:08.154989 2213 topology_manager.go:215] "Topology Admit Handler" podUID="8723c4fa09709fe179dbc822b86729d2" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:50:08.155189 kubelet[2213]: I0513 00:50:08.155162 2213 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:50:08.155301 kubelet[2213]: I0513 00:50:08.155220 2213 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:50:08.241641 kubelet[2213]: I0513 00:50:08.241611 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8723c4fa09709fe179dbc822b86729d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8723c4fa09709fe179dbc822b86729d2\") " pod="kube-system/kube-apiserver-localhost" May 13 
00:50:08.241641 kubelet[2213]: I0513 00:50:08.241646 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8723c4fa09709fe179dbc822b86729d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8723c4fa09709fe179dbc822b86729d2\") " pod="kube-system/kube-apiserver-localhost" May 13 00:50:08.241739 kubelet[2213]: I0513 00:50:08.241664 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:08.241739 kubelet[2213]: I0513 00:50:08.241681 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:08.241739 kubelet[2213]: I0513 00:50:08.241697 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:50:08.241739 kubelet[2213]: I0513 00:50:08.241714 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8723c4fa09709fe179dbc822b86729d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8723c4fa09709fe179dbc822b86729d2\") " pod="kube-system/kube-apiserver-localhost" May 
13 00:50:08.241855 kubelet[2213]: I0513 00:50:08.241768 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:08.241855 kubelet[2213]: I0513 00:50:08.241805 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:08.241855 kubelet[2213]: I0513 00:50:08.241823 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:50:08.465559 kubelet[2213]: E0513 00:50:08.465533 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:08.465939 kubelet[2213]: E0513 00:50:08.465902 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:08.466033 kubelet[2213]: E0513 00:50:08.466004 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:09.030515 kubelet[2213]: I0513 00:50:09.030484 2213 apiserver.go:52] 
"Watching apiserver" May 13 00:50:09.040940 kubelet[2213]: I0513 00:50:09.040898 2213 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:50:09.068155 kubelet[2213]: E0513 00:50:09.066564 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:09.068155 kubelet[2213]: E0513 00:50:09.067002 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:09.070489 kubelet[2213]: E0513 00:50:09.070430 2213 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:50:09.070764 kubelet[2213]: E0513 00:50:09.070743 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:09.080498 kubelet[2213]: I0513 00:50:09.080434 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.080417107 podStartE2EDuration="1.080417107s" podCreationTimestamp="2025-05-13 00:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:50:09.080269266 +0000 UTC m=+1.096407879" watchObservedRunningTime="2025-05-13 00:50:09.080417107 +0000 UTC m=+1.096555720" May 13 00:50:09.086694 kubelet[2213]: I0513 00:50:09.086664 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.086658408 podStartE2EDuration="1.086658408s" podCreationTimestamp="2025-05-13 00:50:08 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:50:09.086644802 +0000 UTC m=+1.102783415" watchObservedRunningTime="2025-05-13 00:50:09.086658408 +0000 UTC m=+1.102797021" May 13 00:50:09.092245 kubelet[2213]: I0513 00:50:09.092179 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.092163211 podStartE2EDuration="1.092163211s" podCreationTimestamp="2025-05-13 00:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:50:09.092162689 +0000 UTC m=+1.108301312" watchObservedRunningTime="2025-05-13 00:50:09.092163211 +0000 UTC m=+1.108301824" May 13 00:50:10.068199 kubelet[2213]: E0513 00:50:10.068168 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:11.069798 kubelet[2213]: E0513 00:50:11.069758 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:12.806891 sudo[1479]: pam_unix(sudo:session): session closed for user root May 13 00:50:12.804000 audit[1479]: USER_END pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 00:50:12.808429 sshd[1474]: pam_unix(sshd:session): session closed for user core May 13 00:50:12.813906 kernel: audit: type=1106 audit(1747097412.804:220): pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=? res=success' May 13 00:50:12.814050 kernel: audit: type=1104 audit(1747097412.805:221): pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 00:50:12.805000 audit[1479]: CRED_DISP pid=1479 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 00:50:12.811162 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:46196.service: Deactivated successfully. May 13 00:50:12.812051 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:50:12.812090 systemd-logind[1296]: Session 7 logged out. Waiting for processes to exit. May 13 00:50:12.812816 systemd-logind[1296]: Removed session 7. May 13 00:50:12.807000 audit[1474]: USER_END pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:12.807000 audit[1474]: CRED_DISP pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:12.822089 kernel: audit: type=1106 audit(1747097412.807:222): pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:12.822143 kernel: audit: type=1104 audit(1747097412.807:223): pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:12.822160 kernel: audit: type=1131 audit(1747097412.809:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.140:22-10.0.0.1:46196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:12.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.140:22-10.0.0.1:46196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:17.487225 kubelet[2213]: E0513 00:50:17.487187 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:17.968497 kubelet[2213]: E0513 00:50:17.968467 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:18.076903 kubelet[2213]: E0513 00:50:18.076701 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:18.076903 kubelet[2213]: E0513 00:50:18.076827 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:20.288540 kubelet[2213]: E0513 00:50:20.288511 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:21.076160 update_engine[1298]: I0513 00:50:21.076100 1298 update_attempter.cc:509] Updating boot flags... 
May 13 00:50:22.491103 kubelet[2213]: I0513 00:50:22.491061 2213 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:50:22.491539 env[1313]: time="2025-05-13T00:50:22.491500008Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:50:22.491750 kubelet[2213]: I0513 00:50:22.491675 2213 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:50:23.435603 kubelet[2213]: I0513 00:50:23.435549 2213 topology_manager.go:215] "Topology Admit Handler" podUID="c4639f73-87ed-4146-ac03-9ff37799be28" podNamespace="kube-system" podName="kube-proxy-4rxhr" May 13 00:50:23.529573 kubelet[2213]: I0513 00:50:23.529542 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4639f73-87ed-4146-ac03-9ff37799be28-kube-proxy\") pod \"kube-proxy-4rxhr\" (UID: \"c4639f73-87ed-4146-ac03-9ff37799be28\") " pod="kube-system/kube-proxy-4rxhr" May 13 00:50:23.530010 kubelet[2213]: I0513 00:50:23.529991 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4639f73-87ed-4146-ac03-9ff37799be28-xtables-lock\") pod \"kube-proxy-4rxhr\" (UID: \"c4639f73-87ed-4146-ac03-9ff37799be28\") " pod="kube-system/kube-proxy-4rxhr" May 13 00:50:23.530124 kubelet[2213]: I0513 00:50:23.530098 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4639f73-87ed-4146-ac03-9ff37799be28-lib-modules\") pod \"kube-proxy-4rxhr\" (UID: \"c4639f73-87ed-4146-ac03-9ff37799be28\") " pod="kube-system/kube-proxy-4rxhr" May 13 00:50:23.530124 kubelet[2213]: I0513 00:50:23.530124 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-cjlf8\" (UniqueName: \"kubernetes.io/projected/c4639f73-87ed-4146-ac03-9ff37799be28-kube-api-access-cjlf8\") pod \"kube-proxy-4rxhr\" (UID: \"c4639f73-87ed-4146-ac03-9ff37799be28\") " pod="kube-system/kube-proxy-4rxhr" May 13 00:50:23.545426 kubelet[2213]: I0513 00:50:23.545375 2213 topology_manager.go:215] "Topology Admit Handler" podUID="b7fe0c72-f75b-4501-9c11-84376f210fd8" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-h98hs" May 13 00:50:23.630726 kubelet[2213]: I0513 00:50:23.630688 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvmgd\" (UniqueName: \"kubernetes.io/projected/b7fe0c72-f75b-4501-9c11-84376f210fd8-kube-api-access-rvmgd\") pod \"tigera-operator-797db67f8-h98hs\" (UID: \"b7fe0c72-f75b-4501-9c11-84376f210fd8\") " pod="tigera-operator/tigera-operator-797db67f8-h98hs" May 13 00:50:23.630831 kubelet[2213]: I0513 00:50:23.630738 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b7fe0c72-f75b-4501-9c11-84376f210fd8-var-lib-calico\") pod \"tigera-operator-797db67f8-h98hs\" (UID: \"b7fe0c72-f75b-4501-9c11-84376f210fd8\") " pod="tigera-operator/tigera-operator-797db67f8-h98hs" May 13 00:50:23.738756 kubelet[2213]: E0513 00:50:23.738365 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:23.740136 env[1313]: time="2025-05-13T00:50:23.740077372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4rxhr,Uid:c4639f73-87ed-4146-ac03-9ff37799be28,Namespace:kube-system,Attempt:0,}" May 13 00:50:23.755293 env[1313]: time="2025-05-13T00:50:23.755205813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:23.755293 env[1313]: time="2025-05-13T00:50:23.755267853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:23.755293 env[1313]: time="2025-05-13T00:50:23.755279689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:23.755722 env[1313]: time="2025-05-13T00:50:23.755647283Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6464859a7b136998cd3ca83ec04ab4605666da5177d4e7ee9953235a286a042e pid=2325 runtime=io.containerd.runc.v2 May 13 00:50:23.785258 env[1313]: time="2025-05-13T00:50:23.784751703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4rxhr,Uid:c4639f73-87ed-4146-ac03-9ff37799be28,Namespace:kube-system,Attempt:0,} returns sandbox id \"6464859a7b136998cd3ca83ec04ab4605666da5177d4e7ee9953235a286a042e\"" May 13 00:50:23.785393 kubelet[2213]: E0513 00:50:23.785292 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:23.786890 env[1313]: time="2025-05-13T00:50:23.786841247Z" level=info msg="CreateContainer within sandbox \"6464859a7b136998cd3ca83ec04ab4605666da5177d4e7ee9953235a286a042e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:50:23.803742 env[1313]: time="2025-05-13T00:50:23.803701671Z" level=info msg="CreateContainer within sandbox \"6464859a7b136998cd3ca83ec04ab4605666da5177d4e7ee9953235a286a042e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a925f3ebfac5bcc8122c31b20112b0061140cb1235a14173193609dba1585673\"" May 13 00:50:23.804274 env[1313]: time="2025-05-13T00:50:23.804220184Z" level=info msg="StartContainer for 
\"a925f3ebfac5bcc8122c31b20112b0061140cb1235a14173193609dba1585673\"" May 13 00:50:23.847985 env[1313]: time="2025-05-13T00:50:23.846325677Z" level=info msg="StartContainer for \"a925f3ebfac5bcc8122c31b20112b0061140cb1235a14173193609dba1585673\" returns successfully" May 13 00:50:23.849375 env[1313]: time="2025-05-13T00:50:23.849339442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-h98hs,Uid:b7fe0c72-f75b-4501-9c11-84376f210fd8,Namespace:tigera-operator,Attempt:0,}" May 13 00:50:23.863401 env[1313]: time="2025-05-13T00:50:23.863221269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:23.863401 env[1313]: time="2025-05-13T00:50:23.863263294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:23.863401 env[1313]: time="2025-05-13T00:50:23.863273557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:23.863746 env[1313]: time="2025-05-13T00:50:23.863610322Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e194c72424f5dbd1e53eebe3eb75a00e5256875c5837bdbf485a26df42707e1 pid=2400 runtime=io.containerd.runc.v2 May 13 00:50:23.901000 audit[2455]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:23.901000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff206af570 a2=0 a3=7fff206af55c items=0 ppid=2377 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.908320 env[1313]: time="2025-05-13T00:50:23.908268677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-h98hs,Uid:b7fe0c72-f75b-4501-9c11-84376f210fd8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9e194c72424f5dbd1e53eebe3eb75a00e5256875c5837bdbf485a26df42707e1\"" May 13 00:50:23.909219 kernel: audit: type=1325 audit(1747097423.901:225): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:23.909275 kernel: audit: type=1300 audit(1747097423.901:225): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff206af570 a2=0 a3=7fff206af55c items=0 ppid=2377 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 13 00:50:23.911167 env[1313]: 
time="2025-05-13T00:50:23.911141707Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 00:50:23.902000 audit[2454]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:23.918964 kernel: audit: type=1327 audit(1747097423.901:225): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 13 00:50:23.919005 kernel: audit: type=1325 audit(1747097423.902:226): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:23.919022 kernel: audit: type=1300 audit(1747097423.902:226): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaafa1400 a2=0 a3=7ffdaafa13ec items=0 ppid=2377 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.902000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaafa1400 a2=0 a3=7ffdaafa13ec items=0 ppid=2377 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 13 00:50:23.922294 kernel: audit: type=1327 audit(1747097423.902:226): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 13 00:50:23.902000 audit[2459]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:23.924573 kernel: audit: type=1325 audit(1747097423.902:227): table=nat:40 
family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:23.902000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff497aea80 a2=0 a3=7fff497aea6c items=0 ppid=2377 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.929129 kernel: audit: type=1300 audit(1747097423.902:227): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff497aea80 a2=0 a3=7fff497aea6c items=0 ppid=2377 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 13 00:50:23.931475 kernel: audit: type=1327 audit(1747097423.902:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 13 00:50:23.906000 audit[2464]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:23.933708 kernel: audit: type=1325 audit(1747097423.906:228): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:23.906000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffbfd798a0 a2=0 a3=7fffbfd7988c items=0 ppid=2377 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.906000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 13 00:50:23.907000 audit[2463]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:23.907000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd792e3af0 a2=0 a3=7ffd792e3adc items=0 ppid=2377 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 13 00:50:23.909000 audit[2465]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:23.909000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff82ee48e0 a2=0 a3=7fff82ee48cc items=0 ppid=2377 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:23.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 13 00:50:24.002000 audit[2466]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.002000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff48159e40 a2=0 a3=7fff48159e2c items=0 ppid=2377 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 
00:50:24.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 13 00:50:24.004000 audit[2468]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.004000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcb34aebf0 a2=0 a3=7ffcb34aebdc items=0 ppid=2377 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 13 00:50:24.007000 audit[2471]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.007000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffee7ad9b80 a2=0 a3=7ffee7ad9b6c items=0 ppid=2377 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 13 00:50:24.008000 audit[2472]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 
00:50:24.008000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd426b9a30 a2=0 a3=7ffd426b9a1c items=0 ppid=2377 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 13 00:50:24.010000 audit[2474]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.010000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1f8e8170 a2=0 a3=7ffc1f8e815c items=0 ppid=2377 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 13 00:50:24.010000 audit[2475]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.010000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1c988d90 a2=0 a3=7ffe1c988d7c items=0 ppid=2377 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 13 
00:50:24.012000 audit[2477]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.012000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff313a8410 a2=0 a3=7fff313a83fc items=0 ppid=2377 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 13 00:50:24.015000 audit[2480]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.015000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc39e586a0 a2=0 a3=7ffc39e5868c items=0 ppid=2377 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 13 00:50:24.016000 audit[2481]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.016000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe65cedd10 a2=0 a3=7ffe65cedcfc items=0 ppid=2377 pid=2481 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.016000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 13 00:50:24.018000 audit[2483]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.018000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa905cd60 a2=0 a3=7fffa905cd4c items=0 ppid=2377 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 13 00:50:24.019000 audit[2484]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.019000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefa48a100 a2=0 a3=7ffefa48a0ec items=0 ppid=2377 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 13 00:50:24.021000 audit[2486]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.021000 audit[2486]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdae5ad8d0 a2=0 a3=7ffdae5ad8bc items=0 ppid=2377 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.021000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 13 00:50:24.023000 audit[2489]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.023000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffceb72c130 a2=0 a3=7ffceb72c11c items=0 ppid=2377 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.023000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 13 00:50:24.026000 audit[2492]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.026000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc33ca900 a2=0 a3=7ffcc33ca8ec items=0 ppid=2377 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.026000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 13 00:50:24.027000 audit[2493]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.027000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe41d80ec0 a2=0 a3=7ffe41d80eac items=0 ppid=2377 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 13 00:50:24.029000 audit[2495]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.029000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc43493ef0 a2=0 a3=7ffc43493edc items=0 ppid=2377 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 13 00:50:24.031000 audit[2498]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.031000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2804b6e0 a2=0 a3=7ffd2804b6cc 
items=0 ppid=2377 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 13 00:50:24.032000 audit[2499]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.032000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc82b2a030 a2=0 a3=7ffc82b2a01c items=0 ppid=2377 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 13 00:50:24.034000 audit[2501]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 00:50:24.034000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffb0ac1120 a2=0 a3=7fffb0ac110c items=0 ppid=2377 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 13 00:50:24.051000 audit[2507]: 
NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:24.051000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff1cc86d10 a2=0 a3=7fff1cc86cfc items=0 ppid=2377 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.051000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:24.061000 audit[2507]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:24.061000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff1cc86d10 a2=0 a3=7fff1cc86cfc items=0 ppid=2377 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:24.063000 audit[2511]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.063000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffff90ba930 a2=0 a3=7ffff90ba91c items=0 ppid=2377 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.063000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 13 00:50:24.065000 audit[2513]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.065000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcf95e9a50 a2=0 a3=7ffcf95e9a3c items=0 ppid=2377 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 13 00:50:24.067000 audit[2516]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.067000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd2e602e30 a2=0 a3=7ffd2e602e1c items=0 ppid=2377 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 13 00:50:24.068000 audit[2517]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.068000 audit[2517]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe99d79fd0 a2=0 a3=7ffe99d79fbc items=0 ppid=2377 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 13 00:50:24.070000 audit[2519]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.070000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffce23612d0 a2=0 a3=7ffce23612bc items=0 ppid=2377 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 13 00:50:24.071000 audit[2520]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.071000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff82a145a0 a2=0 a3=7fff82a1458c items=0 ppid=2377 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 13 00:50:24.073000 audit[2522]: 
NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.073000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd339ff1e0 a2=0 a3=7ffd339ff1cc items=0 ppid=2377 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 13 00:50:24.076000 audit[2525]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.076000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffedc38ef50 a2=0 a3=7ffedc38ef3c items=0 ppid=2377 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 13 00:50:24.077000 audit[2526]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.077000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd0762ac0 a2=0 a3=7ffcd0762aac items=0 ppid=2377 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 13 00:50:24.078000 audit[2528]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.078000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff9728e5c0 a2=0 a3=7fff9728e5ac items=0 ppid=2377 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 13 00:50:24.079000 audit[2529]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.079000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0eab17f0 a2=0 a3=7ffe0eab17dc items=0 ppid=2377 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 13 00:50:24.081000 audit[2531]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.081000 audit[2531]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd35a3d8a0 a2=0 a3=7ffd35a3d88c items=0 ppid=2377 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 13 00:50:24.085009 kubelet[2213]: E0513 00:50:24.084987 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:24.085000 audit[2534]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.085000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4beee690 a2=0 a3=7ffc4beee67c items=0 ppid=2377 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 13 00:50:24.088000 audit[2537]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.088000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2c9a79e0 a2=0 a3=7fff2c9a79cc items=0 
ppid=2377 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 13 00:50:24.088000 audit[2538]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.088000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd71a02ec0 a2=0 a3=7ffd71a02eac items=0 ppid=2377 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 13 00:50:24.090000 audit[2540]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.091909 kubelet[2213]: I0513 00:50:24.091861 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4rxhr" podStartSLOduration=1.091847799 podStartE2EDuration="1.091847799s" podCreationTimestamp="2025-05-13 00:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:50:24.091669593 +0000 UTC m=+16.107808206" watchObservedRunningTime="2025-05-13 00:50:24.091847799 +0000 UTC m=+16.107986412" May 13 00:50:24.090000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 
a1=7ffd71ef0870 a2=0 a3=7ffd71ef085c items=0 ppid=2377 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 13 00:50:24.094000 audit[2543]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.094000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe3168e910 a2=0 a3=7ffe3168e8fc items=0 ppid=2377 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.094000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 13 00:50:24.095000 audit[2544]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.095000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe8923ba0 a2=0 a3=7fffe8923b8c items=0 ppid=2377 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 13 
00:50:24.097000 audit[2546]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.097000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe7f7042c0 a2=0 a3=7ffe7f7042ac items=0 ppid=2377 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 13 00:50:24.097000 audit[2547]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.097000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf78664d0 a2=0 a3=7ffdf78664bc items=0 ppid=2377 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 13 00:50:24.099000 audit[2549]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.099000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8aff3020 a2=0 a3=7ffc8aff300c items=0 ppid=2377 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 
13 00:50:24.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 13 00:50:24.102000 audit[2552]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 00:50:24.102000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc95782bf0 a2=0 a3=7ffc95782bdc items=0 ppid=2377 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 13 00:50:24.104000 audit[2554]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 13 00:50:24.104000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffd4f672be0 a2=0 a3=7ffd4f672bcc items=0 ppid=2377 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.104000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:24.104000 audit[2554]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 13 00:50:24.104000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd4f672be0 a2=0 a3=7ffd4f672bcc items=0 ppid=2377 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:24.104000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:25.319856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611977666.mount: Deactivated successfully. May 13 00:50:26.272355 env[1313]: time="2025-05-13T00:50:26.272301306Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:26.274059 env[1313]: time="2025-05-13T00:50:26.274009642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:26.275496 env[1313]: time="2025-05-13T00:50:26.275452457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:26.276785 env[1313]: time="2025-05-13T00:50:26.276747207Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:26.277308 env[1313]: time="2025-05-13T00:50:26.277283883Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 00:50:26.279207 env[1313]: time="2025-05-13T00:50:26.279167304Z" level=info msg="CreateContainer within sandbox \"9e194c72424f5dbd1e53eebe3eb75a00e5256875c5837bdbf485a26df42707e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 00:50:26.289849 env[1313]: 
time="2025-05-13T00:50:26.289812330Z" level=info msg="CreateContainer within sandbox \"9e194c72424f5dbd1e53eebe3eb75a00e5256875c5837bdbf485a26df42707e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8081a27298303b92d406b9c405fdeae1cc14b57706cbafc44121056cee9a9320\"" May 13 00:50:26.291267 env[1313]: time="2025-05-13T00:50:26.290185077Z" level=info msg="StartContainer for \"8081a27298303b92d406b9c405fdeae1cc14b57706cbafc44121056cee9a9320\"" May 13 00:50:26.331510 env[1313]: time="2025-05-13T00:50:26.331454527Z" level=info msg="StartContainer for \"8081a27298303b92d406b9c405fdeae1cc14b57706cbafc44121056cee9a9320\" returns successfully" May 13 00:50:27.098511 kubelet[2213]: I0513 00:50:27.098458 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-h98hs" podStartSLOduration=1.729872148 podStartE2EDuration="4.098435744s" podCreationTimestamp="2025-05-13 00:50:23 +0000 UTC" firstStartedPulling="2025-05-13 00:50:23.909497883 +0000 UTC m=+15.925636496" lastFinishedPulling="2025-05-13 00:50:26.278061479 +0000 UTC m=+18.294200092" observedRunningTime="2025-05-13 00:50:27.098294235 +0000 UTC m=+19.114432849" watchObservedRunningTime="2025-05-13 00:50:27.098435744 +0000 UTC m=+19.114574357" May 13 00:50:28.981000 audit[2594]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:28.984230 kernel: kauditd_printk_skb: 143 callbacks suppressed May 13 00:50:28.984378 kernel: audit: type=1325 audit(1747097428.981:276): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:28.981000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffc5e608e0 a2=0 a3=7fffc5e608cc items=0 ppid=2377 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:28.991696 kernel: audit: type=1300 audit(1747097428.981:276): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffc5e608e0 a2=0 a3=7fffc5e608cc items=0 ppid=2377 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:28.991752 kernel: audit: type=1327 audit(1747097428.981:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:28.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:28.994000 audit[2594]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:28.994000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc5e608e0 a2=0 a3=0 items=0 ppid=2377 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:29.003505 kernel: audit: type=1325 audit(1747097428.994:277): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:29.003576 kernel: audit: type=1300 audit(1747097428.994:277): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc5e608e0 a2=0 a3=0 items=0 ppid=2377 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:29.003607 kernel: audit: 
type=1327 audit(1747097428.994:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:28.994000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:29.007000 audit[2596]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:29.007000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc4db47110 a2=0 a3=7ffc4db470fc items=0 ppid=2377 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:29.016467 kernel: audit: type=1325 audit(1747097429.007:278): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:29.016604 kernel: audit: type=1300 audit(1747097429.007:278): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc4db47110 a2=0 a3=7ffc4db470fc items=0 ppid=2377 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:29.016623 kernel: audit: type=1327 audit(1747097429.007:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:29.007000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:29.020000 audit[2596]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:29.020000 audit[2596]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc4db47110 a2=0 a3=0 items=0 ppid=2377 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:29.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:29.024969 kernel: audit: type=1325 audit(1747097429.020:279): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:29.102788 kubelet[2213]: I0513 00:50:29.102743 2213 topology_manager.go:215] "Topology Admit Handler" podUID="9d9833bf-3cb8-47aa-81c8-491e0b913766" podNamespace="calico-system" podName="calico-typha-54799565d8-8gz79" May 13 00:50:29.148646 kubelet[2213]: I0513 00:50:29.148598 2213 topology_manager.go:215] "Topology Admit Handler" podUID="adac383f-9826-49d1-9670-5eb2ec1f5314" podNamespace="calico-system" podName="calico-node-dbfpr" May 13 00:50:29.162270 kubelet[2213]: I0513 00:50:29.162229 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-lib-modules\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162270 kubelet[2213]: I0513 00:50:29.162263 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/adac383f-9826-49d1-9670-5eb2ec1f5314-node-certs\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162450 kubelet[2213]: I0513 00:50:29.162282 2213 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d9833bf-3cb8-47aa-81c8-491e0b913766-tigera-ca-bundle\") pod \"calico-typha-54799565d8-8gz79\" (UID: \"9d9833bf-3cb8-47aa-81c8-491e0b913766\") " pod="calico-system/calico-typha-54799565d8-8gz79" May 13 00:50:29.162450 kubelet[2213]: I0513 00:50:29.162299 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9d9833bf-3cb8-47aa-81c8-491e0b913766-typha-certs\") pod \"calico-typha-54799565d8-8gz79\" (UID: \"9d9833bf-3cb8-47aa-81c8-491e0b913766\") " pod="calico-system/calico-typha-54799565d8-8gz79" May 13 00:50:29.162450 kubelet[2213]: I0513 00:50:29.162314 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-cni-log-dir\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162450 kubelet[2213]: I0513 00:50:29.162328 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45d2\" (UniqueName: \"kubernetes.io/projected/9d9833bf-3cb8-47aa-81c8-491e0b913766-kube-api-access-t45d2\") pod \"calico-typha-54799565d8-8gz79\" (UID: \"9d9833bf-3cb8-47aa-81c8-491e0b913766\") " pod="calico-system/calico-typha-54799565d8-8gz79" May 13 00:50:29.162450 kubelet[2213]: I0513 00:50:29.162342 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-var-run-calico\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162595 kubelet[2213]: I0513 00:50:29.162367 2213 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-var-lib-calico\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162595 kubelet[2213]: I0513 00:50:29.162382 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-cni-net-dir\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162595 kubelet[2213]: I0513 00:50:29.162395 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adac383f-9826-49d1-9670-5eb2ec1f5314-tigera-ca-bundle\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162595 kubelet[2213]: I0513 00:50:29.162413 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-cni-bin-dir\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162595 kubelet[2213]: I0513 00:50:29.162426 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-xtables-lock\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162708 kubelet[2213]: I0513 00:50:29.162439 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-policysync\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162708 kubelet[2213]: I0513 00:50:29.162452 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/adac383f-9826-49d1-9670-5eb2ec1f5314-flexvol-driver-host\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.162708 kubelet[2213]: I0513 00:50:29.162469 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqz9j\" (UniqueName: \"kubernetes.io/projected/adac383f-9826-49d1-9670-5eb2ec1f5314-kube-api-access-zqz9j\") pod \"calico-node-dbfpr\" (UID: \"adac383f-9826-49d1-9670-5eb2ec1f5314\") " pod="calico-system/calico-node-dbfpr" May 13 00:50:29.254031 kubelet[2213]: I0513 00:50:29.253902 2213 topology_manager.go:215] "Topology Admit Handler" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" podNamespace="calico-system" podName="csi-node-driver-dbllw" May 13 00:50:29.254276 kubelet[2213]: E0513 00:50:29.254159 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:29.263748 kubelet[2213]: I0513 00:50:29.263687 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e52f16f-64af-4b4e-a240-a749e7055c20-kubelet-dir\") pod \"csi-node-driver-dbllw\" (UID: \"8e52f16f-64af-4b4e-a240-a749e7055c20\") " 
pod="calico-system/csi-node-driver-dbllw" May 13 00:50:29.263851 kubelet[2213]: I0513 00:50:29.263769 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8e52f16f-64af-4b4e-a240-a749e7055c20-socket-dir\") pod \"csi-node-driver-dbllw\" (UID: \"8e52f16f-64af-4b4e-a240-a749e7055c20\") " pod="calico-system/csi-node-driver-dbllw" May 13 00:50:29.263851 kubelet[2213]: I0513 00:50:29.263785 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4hp\" (UniqueName: \"kubernetes.io/projected/8e52f16f-64af-4b4e-a240-a749e7055c20-kube-api-access-8q4hp\") pod \"csi-node-driver-dbllw\" (UID: \"8e52f16f-64af-4b4e-a240-a749e7055c20\") " pod="calico-system/csi-node-driver-dbllw" May 13 00:50:29.263851 kubelet[2213]: I0513 00:50:29.263798 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8e52f16f-64af-4b4e-a240-a749e7055c20-registration-dir\") pod \"csi-node-driver-dbllw\" (UID: \"8e52f16f-64af-4b4e-a240-a749e7055c20\") " pod="calico-system/csi-node-driver-dbllw" May 13 00:50:29.263851 kubelet[2213]: I0513 00:50:29.263845 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8e52f16f-64af-4b4e-a240-a749e7055c20-varrun\") pod \"csi-node-driver-dbllw\" (UID: \"8e52f16f-64af-4b4e-a240-a749e7055c20\") " pod="calico-system/csi-node-driver-dbllw" May 13 00:50:29.274061 kubelet[2213]: E0513 00:50:29.274033 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.274211 kubelet[2213]: W0513 00:50:29.274192 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, 
args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.274294 kubelet[2213]: E0513 00:50:29.274277 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.274572 kubelet[2213]: E0513 00:50:29.274562 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.274658 kubelet[2213]: W0513 00:50:29.274642 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.274730 kubelet[2213]: E0513 00:50:29.274716 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.276249 kubelet[2213]: E0513 00:50:29.276217 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.276249 kubelet[2213]: W0513 00:50:29.276244 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.276327 kubelet[2213]: E0513 00:50:29.276268 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.276507 kubelet[2213]: E0513 00:50:29.276486 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.276507 kubelet[2213]: W0513 00:50:29.276500 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.276507 kubelet[2213]: E0513 00:50:29.276508 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.277080 kubelet[2213]: E0513 00:50:29.277059 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.277080 kubelet[2213]: W0513 00:50:29.277073 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.277159 kubelet[2213]: E0513 00:50:29.277140 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.280807 kubelet[2213]: E0513 00:50:29.280784 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.280807 kubelet[2213]: W0513 00:50:29.280798 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.280807 kubelet[2213]: E0513 00:50:29.280808 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.281013 kubelet[2213]: E0513 00:50:29.280969 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.281013 kubelet[2213]: W0513 00:50:29.280983 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.281013 kubelet[2213]: E0513 00:50:29.280991 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.281108 kubelet[2213]: E0513 00:50:29.281103 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.281134 kubelet[2213]: W0513 00:50:29.281119 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.281134 kubelet[2213]: E0513 00:50:29.281128 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.365019 kubelet[2213]: E0513 00:50:29.364992 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.365019 kubelet[2213]: W0513 00:50:29.365010 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.365176 kubelet[2213]: E0513 00:50:29.365030 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.365278 kubelet[2213]: E0513 00:50:29.365264 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.365278 kubelet[2213]: W0513 00:50:29.365273 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.365327 kubelet[2213]: E0513 00:50:29.365288 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.365491 kubelet[2213]: E0513 00:50:29.365475 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.365491 kubelet[2213]: W0513 00:50:29.365484 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.365561 kubelet[2213]: E0513 00:50:29.365496 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.365734 kubelet[2213]: E0513 00:50:29.365713 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.365734 kubelet[2213]: W0513 00:50:29.365724 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.365786 kubelet[2213]: E0513 00:50:29.365737 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.365998 kubelet[2213]: E0513 00:50:29.365971 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.366043 kubelet[2213]: W0513 00:50:29.365998 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.366043 kubelet[2213]: E0513 00:50:29.366023 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.366236 kubelet[2213]: E0513 00:50:29.366221 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.366236 kubelet[2213]: W0513 00:50:29.366231 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.366309 kubelet[2213]: E0513 00:50:29.366242 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.366440 kubelet[2213]: E0513 00:50:29.366424 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.366440 kubelet[2213]: W0513 00:50:29.366435 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.366440 kubelet[2213]: E0513 00:50:29.366447 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.366719 kubelet[2213]: E0513 00:50:29.366693 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.366719 kubelet[2213]: W0513 00:50:29.366716 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.366789 kubelet[2213]: E0513 00:50:29.366740 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.366919 kubelet[2213]: E0513 00:50:29.366907 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.366919 kubelet[2213]: W0513 00:50:29.366916 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.366980 kubelet[2213]: E0513 00:50:29.366961 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.367110 kubelet[2213]: E0513 00:50:29.367095 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.367110 kubelet[2213]: W0513 00:50:29.367104 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.367190 kubelet[2213]: E0513 00:50:29.367127 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.367247 kubelet[2213]: E0513 00:50:29.367233 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.367247 kubelet[2213]: W0513 00:50:29.367242 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.367332 kubelet[2213]: E0513 00:50:29.367264 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.367414 kubelet[2213]: E0513 00:50:29.367392 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.367414 kubelet[2213]: W0513 00:50:29.367403 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.367580 kubelet[2213]: E0513 00:50:29.367432 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.367580 kubelet[2213]: E0513 00:50:29.367571 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.367580 kubelet[2213]: W0513 00:50:29.367578 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.367671 kubelet[2213]: E0513 00:50:29.367592 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.367767 kubelet[2213]: E0513 00:50:29.367754 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.367767 kubelet[2213]: W0513 00:50:29.367764 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.367816 kubelet[2213]: E0513 00:50:29.367790 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.367971 kubelet[2213]: E0513 00:50:29.367957 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.367971 kubelet[2213]: W0513 00:50:29.367969 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.368028 kubelet[2213]: E0513 00:50:29.367983 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.368157 kubelet[2213]: E0513 00:50:29.368143 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.368157 kubelet[2213]: W0513 00:50:29.368152 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.368229 kubelet[2213]: E0513 00:50:29.368164 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.368351 kubelet[2213]: E0513 00:50:29.368336 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.368351 kubelet[2213]: W0513 00:50:29.368347 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.368423 kubelet[2213]: E0513 00:50:29.368359 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.368514 kubelet[2213]: E0513 00:50:29.368500 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.368514 kubelet[2213]: W0513 00:50:29.368511 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.368560 kubelet[2213]: E0513 00:50:29.368522 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.368719 kubelet[2213]: E0513 00:50:29.368699 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.368719 kubelet[2213]: W0513 00:50:29.368712 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.368802 kubelet[2213]: E0513 00:50:29.368747 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.368885 kubelet[2213]: E0513 00:50:29.368869 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.368885 kubelet[2213]: W0513 00:50:29.368879 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.368980 kubelet[2213]: E0513 00:50:29.368906 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.369100 kubelet[2213]: E0513 00:50:29.369084 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.369100 kubelet[2213]: W0513 00:50:29.369095 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.369173 kubelet[2213]: E0513 00:50:29.369118 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.369267 kubelet[2213]: E0513 00:50:29.369249 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.369267 kubelet[2213]: W0513 00:50:29.369259 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.369350 kubelet[2213]: E0513 00:50:29.369273 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.369514 kubelet[2213]: E0513 00:50:29.369488 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.369514 kubelet[2213]: W0513 00:50:29.369511 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.369579 kubelet[2213]: E0513 00:50:29.369542 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.369787 kubelet[2213]: E0513 00:50:29.369772 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.369787 kubelet[2213]: W0513 00:50:29.369784 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.369861 kubelet[2213]: E0513 00:50:29.369792 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.370003 kubelet[2213]: E0513 00:50:29.369988 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.370003 kubelet[2213]: W0513 00:50:29.369999 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.370078 kubelet[2213]: E0513 00:50:29.370008 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:50:29.377218 kubelet[2213]: E0513 00:50:29.377201 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:50:29.377218 kubelet[2213]: W0513 00:50:29.377216 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:50:29.377298 kubelet[2213]: E0513 00:50:29.377229 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:50:29.405479 kubelet[2213]: E0513 00:50:29.405450 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:29.406232 env[1313]: time="2025-05-13T00:50:29.406194565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54799565d8-8gz79,Uid:9d9833bf-3cb8-47aa-81c8-491e0b913766,Namespace:calico-system,Attempt:0,}" May 13 00:50:29.452691 kubelet[2213]: E0513 00:50:29.452670 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:29.453145 env[1313]: time="2025-05-13T00:50:29.453102318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbfpr,Uid:adac383f-9826-49d1-9670-5eb2ec1f5314,Namespace:calico-system,Attempt:0,}" May 13 00:50:29.685891 env[1313]: time="2025-05-13T00:50:29.685827470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:29.685891 env[1313]: time="2025-05-13T00:50:29.685867295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:29.685891 env[1313]: time="2025-05-13T00:50:29.685878079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:29.686098 env[1313]: time="2025-05-13T00:50:29.686008078Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d649d6005e9617bb0bc474edfe98aeafa1dbae181d564f88a249a282c281e388 pid=2642 runtime=io.containerd.runc.v2 May 13 00:50:29.691168 env[1313]: time="2025-05-13T00:50:29.688874911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:29.691168 env[1313]: time="2025-05-13T00:50:29.688912171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:29.691168 env[1313]: time="2025-05-13T00:50:29.688921682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:29.691168 env[1313]: time="2025-05-13T00:50:29.689082298Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe02e71e3da4312adff0b2232a7befed1640ebb60986b043a51636ba9de73065 pid=2658 runtime=io.containerd.runc.v2 May 13 00:50:29.722463 env[1313]: time="2025-05-13T00:50:29.722414249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbfpr,Uid:adac383f-9826-49d1-9670-5eb2ec1f5314,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe02e71e3da4312adff0b2232a7befed1640ebb60986b043a51636ba9de73065\"" May 13 00:50:29.723003 kubelet[2213]: E0513 00:50:29.722984 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:29.724775 env[1313]: time="2025-05-13T00:50:29.724599735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:50:29.729971 env[1313]: time="2025-05-13T00:50:29.729927577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54799565d8-8gz79,Uid:9d9833bf-3cb8-47aa-81c8-491e0b913766,Namespace:calico-system,Attempt:0,} returns sandbox id \"d649d6005e9617bb0bc474edfe98aeafa1dbae181d564f88a249a282c281e388\"" May 13 00:50:29.730520 kubelet[2213]: E0513 00:50:29.730492 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:30.028000 audit[2720]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:30.028000 audit[2720]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffe4ea29970 a2=0 a3=7ffe4ea2995c items=0 ppid=2377 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:30.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:30.033000 audit[2720]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:30.033000 audit[2720]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4ea29970 a2=0 a3=0 items=0 ppid=2377 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:30.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:31.056220 kubelet[2213]: E0513 00:50:31.056170 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:31.779315 env[1313]: time="2025-05-13T00:50:31.779266275Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:31.781170 env[1313]: time="2025-05-13T00:50:31.781118992Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:31.782685 env[1313]: time="2025-05-13T00:50:31.782653021Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:31.784028 env[1313]: time="2025-05-13T00:50:31.783983165Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:31.784519 env[1313]: time="2025-05-13T00:50:31.784486287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 00:50:31.785497 env[1313]: time="2025-05-13T00:50:31.785334324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 00:50:31.786436 env[1313]: time="2025-05-13T00:50:31.786406939Z" level=info msg="CreateContainer within sandbox \"fe02e71e3da4312adff0b2232a7befed1640ebb60986b043a51636ba9de73065\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:50:31.801270 env[1313]: time="2025-05-13T00:50:31.801230652Z" level=info msg="CreateContainer within sandbox \"fe02e71e3da4312adff0b2232a7befed1640ebb60986b043a51636ba9de73065\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c1c2126e9458a45cee0209035983e471dbbe2646411ca1f7310c43a3ed62830\"" May 13 00:50:31.801715 env[1313]: time="2025-05-13T00:50:31.801684018Z" level=info msg="StartContainer for \"6c1c2126e9458a45cee0209035983e471dbbe2646411ca1f7310c43a3ed62830\"" May 13 00:50:31.868835 env[1313]: time="2025-05-13T00:50:31.868772186Z" level=info msg="StartContainer for \"6c1c2126e9458a45cee0209035983e471dbbe2646411ca1f7310c43a3ed62830\" returns successfully" May 13 00:50:31.884487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c1c2126e9458a45cee0209035983e471dbbe2646411ca1f7310c43a3ed62830-rootfs.mount: 
Deactivated successfully. May 13 00:50:31.903605 env[1313]: time="2025-05-13T00:50:31.903568955Z" level=info msg="shim disconnected" id=6c1c2126e9458a45cee0209035983e471dbbe2646411ca1f7310c43a3ed62830 May 13 00:50:31.903736 env[1313]: time="2025-05-13T00:50:31.903698631Z" level=warning msg="cleaning up after shim disconnected" id=6c1c2126e9458a45cee0209035983e471dbbe2646411ca1f7310c43a3ed62830 namespace=k8s.io May 13 00:50:31.903736 env[1313]: time="2025-05-13T00:50:31.903719897Z" level=info msg="cleaning up dead shim" May 13 00:50:31.910089 env[1313]: time="2025-05-13T00:50:31.910044993Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2765 runtime=io.containerd.runc.v2\n" May 13 00:50:32.100515 kubelet[2213]: E0513 00:50:32.100388 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:32.884109 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:39964.service. May 13 00:50:32.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.140:22-10.0.0.1:39964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:50:32.916000 audit[2790]: USER_ACCT pid=2790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:32.917693 sshd[2790]: Accepted publickey for core from 10.0.0.1 port 39964 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:50:32.917000 audit[2790]: CRED_ACQ pid=2790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:32.917000 audit[2790]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf6579050 a2=3 a3=0 items=0 ppid=1 pid=2790 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:32.917000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:50:32.918662 sshd[2790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:50:32.921931 systemd-logind[1296]: New session 8 of user core. May 13 00:50:32.922585 systemd[1]: Started session-8.scope. 
May 13 00:50:32.926000 audit[2790]: USER_START pid=2790 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:32.927000 audit[2793]: CRED_ACQ pid=2793 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:33.025929 sshd[2790]: pam_unix(sshd:session): session closed for user core May 13 00:50:33.025000 audit[2790]: USER_END pid=2790 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:33.025000 audit[2790]: CRED_DISP pid=2790 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:33.028282 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:39964.service: Deactivated successfully. May 13 00:50:33.028984 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:50:33.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.140:22-10.0.0.1:39964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:33.029524 systemd-logind[1296]: Session 8 logged out. Waiting for processes to exit. May 13 00:50:33.030183 systemd-logind[1296]: Removed session 8. 
May 13 00:50:33.055444 kubelet[2213]: E0513 00:50:33.055413 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:34.895464 env[1313]: time="2025-05-13T00:50:34.895419847Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:34.897248 env[1313]: time="2025-05-13T00:50:34.897223657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:34.899028 env[1313]: time="2025-05-13T00:50:34.899007707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:34.900381 env[1313]: time="2025-05-13T00:50:34.900345298Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:34.900779 env[1313]: time="2025-05-13T00:50:34.900742153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 00:50:34.901741 env[1313]: time="2025-05-13T00:50:34.901687069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:50:34.911054 env[1313]: time="2025-05-13T00:50:34.910925184Z" level=info msg="CreateContainer within sandbox 
\"d649d6005e9617bb0bc474edfe98aeafa1dbae181d564f88a249a282c281e388\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:50:35.242819 kubelet[2213]: E0513 00:50:35.242764 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:35.261814 env[1313]: time="2025-05-13T00:50:35.261767981Z" level=info msg="CreateContainer within sandbox \"d649d6005e9617bb0bc474edfe98aeafa1dbae181d564f88a249a282c281e388\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"835681eb45e84ac05c328abad1cbfb7db6b7a1800335bbabcd56c87cd2028ab2\"" May 13 00:50:35.262278 env[1313]: time="2025-05-13T00:50:35.262231341Z" level=info msg="StartContainer for \"835681eb45e84ac05c328abad1cbfb7db6b7a1800335bbabcd56c87cd2028ab2\"" May 13 00:50:35.313667 env[1313]: time="2025-05-13T00:50:35.313631092Z" level=info msg="StartContainer for \"835681eb45e84ac05c328abad1cbfb7db6b7a1800335bbabcd56c87cd2028ab2\" returns successfully" May 13 00:50:36.110692 kubelet[2213]: E0513 00:50:36.110661 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:36.122716 kubelet[2213]: I0513 00:50:36.122665 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54799565d8-8gz79" podStartSLOduration=1.9520509929999998 podStartE2EDuration="7.12264781s" podCreationTimestamp="2025-05-13 00:50:29 +0000 UTC" firstStartedPulling="2025-05-13 00:50:29.730917459 +0000 UTC m=+21.747056072" lastFinishedPulling="2025-05-13 00:50:34.901514266 +0000 UTC m=+26.917652889" observedRunningTime="2025-05-13 00:50:36.121920062 +0000 UTC 
m=+28.138058675" watchObservedRunningTime="2025-05-13 00:50:36.12264781 +0000 UTC m=+28.138786423" May 13 00:50:36.140000 audit[2847]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:36.142256 kernel: kauditd_printk_skb: 19 callbacks suppressed May 13 00:50:36.142383 kernel: audit: type=1325 audit(1747097436.140:291): table=filter:95 family=2 entries=17 op=nft_register_rule pid=2847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:36.140000 audit[2847]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc0b2e3eb0 a2=0 a3=7ffc0b2e3e9c items=0 ppid=2377 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:36.149178 kernel: audit: type=1300 audit(1747097436.140:291): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc0b2e3eb0 a2=0 a3=7ffc0b2e3e9c items=0 ppid=2377 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:36.149219 kernel: audit: type=1327 audit(1747097436.140:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:36.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:36.152000 audit[2847]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:36.152000 audit[2847]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc0b2e3eb0 a2=0 a3=7ffc0b2e3e9c items=0 ppid=2377 pid=2847 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:36.160407 kernel: audit: type=1325 audit(1747097436.152:292): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:36.160483 kernel: audit: type=1300 audit(1747097436.152:292): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc0b2e3eb0 a2=0 a3=7ffc0b2e3e9c items=0 ppid=2377 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:36.160505 kernel: audit: type=1327 audit(1747097436.152:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:36.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:37.055871 kubelet[2213]: E0513 00:50:37.055815 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:37.112278 kubelet[2213]: E0513 00:50:37.112243 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:38.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.140:22-10.0.0.1:56488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:50:38.030438 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:56488.service. May 13 00:50:38.034968 kernel: audit: type=1130 audit(1747097438.029:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.140:22-10.0.0.1:56488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:38.113255 kubelet[2213]: E0513 00:50:38.113221 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:38.534000 audit[2848]: USER_ACCT pid=2848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:38.535379 sshd[2848]: Accepted publickey for core from 10.0.0.1 port 56488 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:50:38.545787 kernel: audit: type=1101 audit(1747097438.534:294): pid=2848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:38.545913 kernel: audit: type=1103 audit(1747097438.538:295): pid=2848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:38.545954 kernel: audit: type=1006 audit(1747097438.538:296): pid=2848 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 May 13 00:50:38.538000 audit[2848]: CRED_ACQ pid=2848 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:38.544999 systemd[1]: Started session-9.scope. May 13 00:50:38.539762 sshd[2848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:50:38.546020 systemd-logind[1296]: New session 9 of user core. May 13 00:50:38.538000 audit[2848]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb9ac3d40 a2=3 a3=0 items=0 ppid=1 pid=2848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:38.538000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:50:38.550000 audit[2848]: USER_START pid=2848 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:38.551000 audit[2851]: CRED_ACQ pid=2851 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:38.687347 sshd[2848]: pam_unix(sshd:session): session closed for user core May 13 00:50:38.687000 audit[2848]: USER_END pid=2848 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:38.687000 audit[2848]: CRED_DISP pid=2848 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:38.689713 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:56488.service: Deactivated successfully. May 13 00:50:38.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.140:22-10.0.0.1:56488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:38.690708 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:50:38.691127 systemd-logind[1296]: Session 9 logged out. Waiting for processes to exit. May 13 00:50:38.692165 systemd-logind[1296]: Removed session 9. May 13 00:50:39.056465 kubelet[2213]: E0513 00:50:39.056426 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:41.055170 kubelet[2213]: E0513 00:50:41.055126 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:41.411713 env[1313]: time="2025-05-13T00:50:41.411603910Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:41.448917 env[1313]: time="2025-05-13T00:50:41.448844777Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 
00:50:41.451978 env[1313]: time="2025-05-13T00:50:41.451908045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:41.453573 env[1313]: time="2025-05-13T00:50:41.453525785Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:41.453974 env[1313]: time="2025-05-13T00:50:41.453911818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 00:50:41.455910 env[1313]: time="2025-05-13T00:50:41.455868604Z" level=info msg="CreateContainer within sandbox \"fe02e71e3da4312adff0b2232a7befed1640ebb60986b043a51636ba9de73065\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:50:41.685779 env[1313]: time="2025-05-13T00:50:41.685710313Z" level=info msg="CreateContainer within sandbox \"fe02e71e3da4312adff0b2232a7befed1640ebb60986b043a51636ba9de73065\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"42ca0924ae7b07b9afb088fad753be6c98743cf20d3378b2b00b653be6b5908a\"" May 13 00:50:41.686025 env[1313]: time="2025-05-13T00:50:41.686002031Z" level=info msg="StartContainer for \"42ca0924ae7b07b9afb088fad753be6c98743cf20d3378b2b00b653be6b5908a\"" May 13 00:50:42.253422 env[1313]: time="2025-05-13T00:50:42.253356568Z" level=info msg="StartContainer for \"42ca0924ae7b07b9afb088fad753be6c98743cf20d3378b2b00b653be6b5908a\" returns successfully" May 13 00:50:42.255657 kubelet[2213]: E0513 00:50:42.255628 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:43.055562 
kubelet[2213]: E0513 00:50:43.055525 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:43.257401 kubelet[2213]: E0513 00:50:43.257362 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:43.470133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42ca0924ae7b07b9afb088fad753be6c98743cf20d3378b2b00b653be6b5908a-rootfs.mount: Deactivated successfully. May 13 00:50:43.472455 env[1313]: time="2025-05-13T00:50:43.472413307Z" level=info msg="shim disconnected" id=42ca0924ae7b07b9afb088fad753be6c98743cf20d3378b2b00b653be6b5908a May 13 00:50:43.472685 env[1313]: time="2025-05-13T00:50:43.472457127Z" level=warning msg="cleaning up after shim disconnected" id=42ca0924ae7b07b9afb088fad753be6c98743cf20d3378b2b00b653be6b5908a namespace=k8s.io May 13 00:50:43.472685 env[1313]: time="2025-05-13T00:50:43.472466586Z" level=info msg="cleaning up dead shim" May 13 00:50:43.478418 env[1313]: time="2025-05-13T00:50:43.478380623Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2911 runtime=io.containerd.runc.v2\n" May 13 00:50:43.493007 kubelet[2213]: I0513 00:50:43.492980 2213 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:50:43.508921 kubelet[2213]: I0513 00:50:43.508875 2213 topology_manager.go:215] "Topology Admit Handler" podUID="ad75be90-a580-4409-ab7f-57d0bc34975e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-95lft" May 13 00:50:43.512896 kubelet[2213]: I0513 00:50:43.512847 2213 topology_manager.go:215] "Topology Admit 
Handler" podUID="d483d5d7-194a-4438-b970-a2e8097bf20a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8w5v4" May 13 00:50:43.513070 kubelet[2213]: I0513 00:50:43.513017 2213 topology_manager.go:215] "Topology Admit Handler" podUID="16ab6220-b9fb-42eb-b90d-d41f68bb7889" podNamespace="calico-apiserver" podName="calico-apiserver-857fcd798-fpxlt" May 13 00:50:43.513132 kubelet[2213]: I0513 00:50:43.513111 2213 topology_manager.go:215] "Topology Admit Handler" podUID="575e7f4c-4b2b-4b60-8634-168da3235e29" podNamespace="calico-apiserver" podName="calico-apiserver-857fcd798-pcg4l" May 13 00:50:43.513641 kubelet[2213]: I0513 00:50:43.513614 2213 topology_manager.go:215] "Topology Admit Handler" podUID="717e1a73-0b5d-4ee5-9bae-65be581845ed" podNamespace="calico-system" podName="calico-kube-controllers-f548f5c9b-2tczf" May 13 00:50:43.560312 kubelet[2213]: I0513 00:50:43.560273 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d483d5d7-194a-4438-b970-a2e8097bf20a-config-volume\") pod \"coredns-7db6d8ff4d-8w5v4\" (UID: \"d483d5d7-194a-4438-b970-a2e8097bf20a\") " pod="kube-system/coredns-7db6d8ff4d-8w5v4" May 13 00:50:43.560479 kubelet[2213]: I0513 00:50:43.560336 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/575e7f4c-4b2b-4b60-8634-168da3235e29-calico-apiserver-certs\") pod \"calico-apiserver-857fcd798-pcg4l\" (UID: \"575e7f4c-4b2b-4b60-8634-168da3235e29\") " pod="calico-apiserver/calico-apiserver-857fcd798-pcg4l" May 13 00:50:43.560479 kubelet[2213]: I0513 00:50:43.560370 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5np24\" (UniqueName: \"kubernetes.io/projected/575e7f4c-4b2b-4b60-8634-168da3235e29-kube-api-access-5np24\") pod \"calico-apiserver-857fcd798-pcg4l\" (UID: 
\"575e7f4c-4b2b-4b60-8634-168da3235e29\") " pod="calico-apiserver/calico-apiserver-857fcd798-pcg4l" May 13 00:50:43.560479 kubelet[2213]: I0513 00:50:43.560399 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717e1a73-0b5d-4ee5-9bae-65be581845ed-tigera-ca-bundle\") pod \"calico-kube-controllers-f548f5c9b-2tczf\" (UID: \"717e1a73-0b5d-4ee5-9bae-65be581845ed\") " pod="calico-system/calico-kube-controllers-f548f5c9b-2tczf" May 13 00:50:43.560552 kubelet[2213]: I0513 00:50:43.560468 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad75be90-a580-4409-ab7f-57d0bc34975e-config-volume\") pod \"coredns-7db6d8ff4d-95lft\" (UID: \"ad75be90-a580-4409-ab7f-57d0bc34975e\") " pod="kube-system/coredns-7db6d8ff4d-95lft" May 13 00:50:43.560552 kubelet[2213]: I0513 00:50:43.560541 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5p2q\" (UniqueName: \"kubernetes.io/projected/ad75be90-a580-4409-ab7f-57d0bc34975e-kube-api-access-g5p2q\") pod \"coredns-7db6d8ff4d-95lft\" (UID: \"ad75be90-a580-4409-ab7f-57d0bc34975e\") " pod="kube-system/coredns-7db6d8ff4d-95lft" May 13 00:50:43.560605 kubelet[2213]: I0513 00:50:43.560566 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8zc\" (UniqueName: \"kubernetes.io/projected/717e1a73-0b5d-4ee5-9bae-65be581845ed-kube-api-access-nz8zc\") pod \"calico-kube-controllers-f548f5c9b-2tczf\" (UID: \"717e1a73-0b5d-4ee5-9bae-65be581845ed\") " pod="calico-system/calico-kube-controllers-f548f5c9b-2tczf" May 13 00:50:43.560605 kubelet[2213]: I0513 00:50:43.560594 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667ws\" (UniqueName: 
\"kubernetes.io/projected/d483d5d7-194a-4438-b970-a2e8097bf20a-kube-api-access-667ws\") pod \"coredns-7db6d8ff4d-8w5v4\" (UID: \"d483d5d7-194a-4438-b970-a2e8097bf20a\") " pod="kube-system/coredns-7db6d8ff4d-8w5v4" May 13 00:50:43.560654 kubelet[2213]: I0513 00:50:43.560618 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16ab6220-b9fb-42eb-b90d-d41f68bb7889-calico-apiserver-certs\") pod \"calico-apiserver-857fcd798-fpxlt\" (UID: \"16ab6220-b9fb-42eb-b90d-d41f68bb7889\") " pod="calico-apiserver/calico-apiserver-857fcd798-fpxlt" May 13 00:50:43.560654 kubelet[2213]: I0513 00:50:43.560638 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb5m4\" (UniqueName: \"kubernetes.io/projected/16ab6220-b9fb-42eb-b90d-d41f68bb7889-kube-api-access-rb5m4\") pod \"calico-apiserver-857fcd798-fpxlt\" (UID: \"16ab6220-b9fb-42eb-b90d-d41f68bb7889\") " pod="calico-apiserver/calico-apiserver-857fcd798-fpxlt" May 13 00:50:43.690576 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:41518.service. May 13 00:50:43.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.140:22-10.0.0.1:41518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:43.691666 kernel: kauditd_printk_skb: 7 callbacks suppressed May 13 00:50:43.691785 kernel: audit: type=1130 audit(1747097443.689:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.140:22-10.0.0.1:41518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:50:43.720000 audit[2933]: USER_ACCT pid=2933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.721415 sshd[2933]: Accepted publickey for core from 10.0.0.1 port 41518 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:50:43.723490 sshd[2933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:50:43.722000 audit[2933]: CRED_ACQ pid=2933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.727288 systemd-logind[1296]: New session 10 of user core. May 13 00:50:43.728019 systemd[1]: Started session-10.scope. May 13 00:50:43.729133 kernel: audit: type=1101 audit(1747097443.720:303): pid=2933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.729177 kernel: audit: type=1103 audit(1747097443.722:304): pid=2933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.731445 kernel: audit: type=1006 audit(1747097443.722:305): pid=2933 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 13 00:50:43.722000 audit[2933]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd09cf25b0 a2=3 a3=0 items=0 ppid=1 pid=2933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:43.735443 kernel: audit: type=1300 audit(1747097443.722:305): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd09cf25b0 a2=3 a3=0 items=0 ppid=1 pid=2933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:43.735506 kernel: audit: type=1327 audit(1747097443.722:305): proctitle=737368643A20636F7265205B707269765D May 13 00:50:43.722000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:50:43.736775 kernel: audit: type=1105 audit(1747097443.732:306): pid=2933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.732000 audit[2933]: USER_START pid=2933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.740867 kernel: audit: type=1103 audit(1747097443.733:307): pid=2936 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.733000 audit[2936]: CRED_ACQ pid=2936 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.814192 kubelet[2213]: E0513 00:50:43.814153 2213 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:43.814896 env[1313]: time="2025-05-13T00:50:43.814623823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-95lft,Uid:ad75be90-a580-4409-ab7f-57d0bc34975e,Namespace:kube-system,Attempt:0,}" May 13 00:50:43.822079 env[1313]: time="2025-05-13T00:50:43.822038173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857fcd798-pcg4l,Uid:575e7f4c-4b2b-4b60-8634-168da3235e29,Namespace:calico-apiserver,Attempt:0,}" May 13 00:50:43.824528 env[1313]: time="2025-05-13T00:50:43.823372438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857fcd798-fpxlt,Uid:16ab6220-b9fb-42eb-b90d-d41f68bb7889,Namespace:calico-apiserver,Attempt:0,}" May 13 00:50:43.824573 kubelet[2213]: E0513 00:50:43.824108 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:43.825427 env[1313]: time="2025-05-13T00:50:43.825392723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f548f5c9b-2tczf,Uid:717e1a73-0b5d-4ee5-9bae-65be581845ed,Namespace:calico-system,Attempt:0,}" May 13 00:50:43.825644 env[1313]: time="2025-05-13T00:50:43.825624356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8w5v4,Uid:d483d5d7-194a-4438-b970-a2e8097bf20a,Namespace:kube-system,Attempt:0,}" May 13 00:50:43.837489 sshd[2933]: pam_unix(sshd:session): session closed for user core May 13 00:50:43.837000 audit[2933]: USER_END pid=2933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 
00:50:43.843986 kernel: audit: type=1106 audit(1747097443.837:308): pid=2933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.840112 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:41518.service: Deactivated successfully. May 13 00:50:43.841404 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:50:43.841739 systemd-logind[1296]: Session 10 logged out. Waiting for processes to exit. May 13 00:50:43.842741 systemd-logind[1296]: Removed session 10. May 13 00:50:43.837000 audit[2933]: CRED_DISP pid=2933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.140:22-10.0.0.1:41518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:50:43.849155 kernel: audit: type=1104 audit(1747097443.837:309): pid=2933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:43.918891 env[1313]: time="2025-05-13T00:50:43.918819046Z" level=error msg="Failed to destroy network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.919203 env[1313]: time="2025-05-13T00:50:43.919171766Z" level=error msg="encountered an error cleaning up failed sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.919269 env[1313]: time="2025-05-13T00:50:43.919218881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857fcd798-fpxlt,Uid:16ab6220-b9fb-42eb-b90d-d41f68bb7889,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.919481 kubelet[2213]: E0513 00:50:43.919439 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.919569 kubelet[2213]: E0513 00:50:43.919510 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857fcd798-fpxlt" May 13 00:50:43.919569 kubelet[2213]: E0513 00:50:43.919532 2213 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857fcd798-fpxlt" May 13 00:50:43.919627 kubelet[2213]: E0513 00:50:43.919595 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-857fcd798-fpxlt_calico-apiserver(16ab6220-b9fb-42eb-b90d-d41f68bb7889)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-857fcd798-fpxlt_calico-apiserver(16ab6220-b9fb-42eb-b90d-d41f68bb7889)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857fcd798-fpxlt" podUID="16ab6220-b9fb-42eb-b90d-d41f68bb7889" May 13 
00:50:43.922083 env[1313]: time="2025-05-13T00:50:43.922042087Z" level=error msg="Failed to destroy network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.922711 env[1313]: time="2025-05-13T00:50:43.922657134Z" level=error msg="encountered an error cleaning up failed sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.922880 env[1313]: time="2025-05-13T00:50:43.922850869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-95lft,Uid:ad75be90-a580-4409-ab7f-57d0bc34975e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.923111 kubelet[2213]: E0513 00:50:43.923092 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.923178 kubelet[2213]: E0513 00:50:43.923131 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-95lft" May 13 00:50:43.923178 kubelet[2213]: E0513 00:50:43.923145 2213 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-95lft" May 13 00:50:43.923236 kubelet[2213]: E0513 00:50:43.923173 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-95lft_kube-system(ad75be90-a580-4409-ab7f-57d0bc34975e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-95lft_kube-system(ad75be90-a580-4409-ab7f-57d0bc34975e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-95lft" podUID="ad75be90-a580-4409-ab7f-57d0bc34975e" May 13 00:50:43.930772 env[1313]: time="2025-05-13T00:50:43.930717433Z" level=error msg="Failed to destroy network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.931074 
env[1313]: time="2025-05-13T00:50:43.931044580Z" level=error msg="encountered an error cleaning up failed sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.931141 env[1313]: time="2025-05-13T00:50:43.931088741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8w5v4,Uid:d483d5d7-194a-4438-b970-a2e8097bf20a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.931544 kubelet[2213]: E0513 00:50:43.931259 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.931544 kubelet[2213]: E0513 00:50:43.931316 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8w5v4" May 13 00:50:43.931544 kubelet[2213]: E0513 00:50:43.931333 2213 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8w5v4" May 13 00:50:43.931725 kubelet[2213]: E0513 00:50:43.931364 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8w5v4_kube-system(d483d5d7-194a-4438-b970-a2e8097bf20a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8w5v4_kube-system(d483d5d7-194a-4438-b970-a2e8097bf20a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8w5v4" podUID="d483d5d7-194a-4438-b970-a2e8097bf20a" May 13 00:50:43.939214 env[1313]: time="2025-05-13T00:50:43.939162598Z" level=error msg="Failed to destroy network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.939510 env[1313]: time="2025-05-13T00:50:43.939475758Z" level=error msg="encountered an error cleaning up failed sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 13 00:50:43.939583 env[1313]: time="2025-05-13T00:50:43.939521992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f548f5c9b-2tczf,Uid:717e1a73-0b5d-4ee5-9bae-65be581845ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.939746 kubelet[2213]: E0513 00:50:43.939704 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.939797 kubelet[2213]: E0513 00:50:43.939760 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f548f5c9b-2tczf" May 13 00:50:43.939797 kubelet[2213]: E0513 00:50:43.939780 2213 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-f548f5c9b-2tczf" May 13 00:50:43.939846 kubelet[2213]: E0513 00:50:43.939817 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f548f5c9b-2tczf_calico-system(717e1a73-0b5d-4ee5-9bae-65be581845ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f548f5c9b-2tczf_calico-system(717e1a73-0b5d-4ee5-9bae-65be581845ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f548f5c9b-2tczf" podUID="717e1a73-0b5d-4ee5-9bae-65be581845ed" May 13 00:50:43.948955 env[1313]: time="2025-05-13T00:50:43.948899982Z" level=error msg="Failed to destroy network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.949222 env[1313]: time="2025-05-13T00:50:43.949195916Z" level=error msg="encountered an error cleaning up failed sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.949282 env[1313]: time="2025-05-13T00:50:43.949234005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857fcd798-pcg4l,Uid:575e7f4c-4b2b-4b60-8634-168da3235e29,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.949458 kubelet[2213]: E0513 00:50:43.949413 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:43.949517 kubelet[2213]: E0513 00:50:43.949467 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857fcd798-pcg4l" May 13 00:50:43.949517 kubelet[2213]: E0513 00:50:43.949486 2213 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857fcd798-pcg4l" May 13 00:50:43.949567 kubelet[2213]: E0513 00:50:43.949521 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-857fcd798-pcg4l_calico-apiserver(575e7f4c-4b2b-4b60-8634-168da3235e29)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-857fcd798-pcg4l_calico-apiserver(575e7f4c-4b2b-4b60-8634-168da3235e29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857fcd798-pcg4l" podUID="575e7f4c-4b2b-4b60-8634-168da3235e29" May 13 00:50:44.259685 kubelet[2213]: I0513 00:50:44.259650 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:50:44.260189 env[1313]: time="2025-05-13T00:50:44.260156178Z" level=info msg="StopPodSandbox for \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\"" May 13 00:50:44.262450 kubelet[2213]: E0513 00:50:44.261980 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:44.263074 env[1313]: time="2025-05-13T00:50:44.263039282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:50:44.263789 kubelet[2213]: I0513 00:50:44.263768 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:50:44.267294 env[1313]: time="2025-05-13T00:50:44.267250492Z" level=info msg="StopPodSandbox for \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\"" May 13 00:50:44.271464 kubelet[2213]: I0513 00:50:44.271106 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:50:44.271589 env[1313]: 
time="2025-05-13T00:50:44.271552517Z" level=info msg="StopPodSandbox for \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\"" May 13 00:50:44.272282 kubelet[2213]: I0513 00:50:44.272251 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:50:44.273426 env[1313]: time="2025-05-13T00:50:44.273395632Z" level=info msg="StopPodSandbox for \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\"" May 13 00:50:44.275928 kubelet[2213]: I0513 00:50:44.275906 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:50:44.276417 env[1313]: time="2025-05-13T00:50:44.276380704Z" level=info msg="StopPodSandbox for \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\"" May 13 00:50:44.291538 env[1313]: time="2025-05-13T00:50:44.291488233Z" level=error msg="StopPodSandbox for \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\" failed" error="failed to destroy network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:44.291910 kubelet[2213]: E0513 00:50:44.291870 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:50:44.291989 kubelet[2213]: E0513 
00:50:44.291926 2213 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d"} May 13 00:50:44.292018 kubelet[2213]: E0513 00:50:44.291992 2213 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad75be90-a580-4409-ab7f-57d0bc34975e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:50:44.292079 kubelet[2213]: E0513 00:50:44.292016 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad75be90-a580-4409-ab7f-57d0bc34975e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-95lft" podUID="ad75be90-a580-4409-ab7f-57d0bc34975e" May 13 00:50:44.303261 env[1313]: time="2025-05-13T00:50:44.303211977Z" level=error msg="StopPodSandbox for \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\" failed" error="failed to destroy network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:44.303711 kubelet[2213]: E0513 00:50:44.303580 2213 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:50:44.303711 kubelet[2213]: E0513 00:50:44.303624 2213 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce"} May 13 00:50:44.303711 kubelet[2213]: E0513 00:50:44.303656 2213 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"575e7f4c-4b2b-4b60-8634-168da3235e29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:50:44.303711 kubelet[2213]: E0513 00:50:44.303677 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"575e7f4c-4b2b-4b60-8634-168da3235e29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857fcd798-pcg4l" podUID="575e7f4c-4b2b-4b60-8634-168da3235e29" May 13 00:50:44.304049 env[1313]: time="2025-05-13T00:50:44.304011636Z" level=error msg="StopPodSandbox for 
\"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\" failed" error="failed to destroy network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:44.304235 kubelet[2213]: E0513 00:50:44.304110 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:50:44.304235 kubelet[2213]: E0513 00:50:44.304131 2213 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464"} May 13 00:50:44.304235 kubelet[2213]: E0513 00:50:44.304150 2213 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d483d5d7-194a-4438-b970-a2e8097bf20a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:50:44.304235 kubelet[2213]: E0513 00:50:44.304165 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d483d5d7-194a-4438-b970-a2e8097bf20a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8w5v4" podUID="d483d5d7-194a-4438-b970-a2e8097bf20a" May 13 00:50:44.311611 env[1313]: time="2025-05-13T00:50:44.311568633Z" level=error msg="StopPodSandbox for \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\" failed" error="failed to destroy network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:44.311965 kubelet[2213]: E0513 00:50:44.311913 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:50:44.312029 kubelet[2213]: E0513 00:50:44.311977 2213 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b"} May 13 00:50:44.312029 kubelet[2213]: E0513 00:50:44.312017 2213 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16ab6220-b9fb-42eb-b90d-d41f68bb7889\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:50:44.312113 kubelet[2213]: E0513 00:50:44.312040 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16ab6220-b9fb-42eb-b90d-d41f68bb7889\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857fcd798-fpxlt" podUID="16ab6220-b9fb-42eb-b90d-d41f68bb7889" May 13 00:50:44.324118 env[1313]: time="2025-05-13T00:50:44.324053178Z" level=error msg="StopPodSandbox for \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\" failed" error="failed to destroy network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:44.324306 kubelet[2213]: E0513 00:50:44.324277 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:50:44.324370 kubelet[2213]: E0513 00:50:44.324310 2213 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021"} May 13 00:50:44.324370 kubelet[2213]: E0513 00:50:44.324333 2213 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"717e1a73-0b5d-4ee5-9bae-65be581845ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:50:44.324370 kubelet[2213]: E0513 00:50:44.324359 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"717e1a73-0b5d-4ee5-9bae-65be581845ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f548f5c9b-2tczf" podUID="717e1a73-0b5d-4ee5-9bae-65be581845ed" May 13 00:50:45.057568 env[1313]: time="2025-05-13T00:50:45.057521635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbllw,Uid:8e52f16f-64af-4b4e-a240-a749e7055c20,Namespace:calico-system,Attempt:0,}" May 13 00:50:45.118541 env[1313]: time="2025-05-13T00:50:45.118485107Z" level=error msg="Failed to destroy network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:45.118835 env[1313]: 
time="2025-05-13T00:50:45.118807773Z" level=error msg="encountered an error cleaning up failed sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:45.118879 env[1313]: time="2025-05-13T00:50:45.118852394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbllw,Uid:8e52f16f-64af-4b4e-a240-a749e7055c20,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:45.119084 kubelet[2213]: E0513 00:50:45.119046 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:45.119149 kubelet[2213]: E0513 00:50:45.119107 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbllw" May 13 00:50:45.119149 kubelet[2213]: E0513 00:50:45.119125 2213 kuberuntime_manager.go:1166] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbllw" May 13 00:50:45.119201 kubelet[2213]: E0513 00:50:45.119163 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbllw_calico-system(8e52f16f-64af-4b4e-a240-a749e7055c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbllw_calico-system(8e52f16f-64af-4b4e-a240-a749e7055c20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:45.120762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205-shm.mount: Deactivated successfully. 
May 13 00:50:45.277983 kubelet[2213]: I0513 00:50:45.277687 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:50:45.278301 env[1313]: time="2025-05-13T00:50:45.278252828Z" level=info msg="StopPodSandbox for \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\"" May 13 00:50:45.302125 env[1313]: time="2025-05-13T00:50:45.302054033Z" level=error msg="StopPodSandbox for \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\" failed" error="failed to destroy network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:50:45.302330 kubelet[2213]: E0513 00:50:45.302282 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:50:45.302396 kubelet[2213]: E0513 00:50:45.302337 2213 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205"} May 13 00:50:45.302396 kubelet[2213]: E0513 00:50:45.302368 2213 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e52f16f-64af-4b4e-a240-a749e7055c20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:50:45.302501 kubelet[2213]: E0513 00:50:45.302392 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e52f16f-64af-4b4e-a240-a749e7055c20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbllw" podUID="8e52f16f-64af-4b4e-a240-a749e7055c20" May 13 00:50:48.840658 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:41532.service. May 13 00:50:48.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.140:22-10.0.0.1:41532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:48.842331 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 00:50:48.842385 kernel: audit: type=1130 audit(1747097448.838:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.140:22-10.0.0.1:41532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:50:48.870000 audit[3316]: USER_ACCT pid=3316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:48.872798 sshd[3316]: Accepted publickey for core from 10.0.0.1 port 41532 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:50:48.876012 sshd[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:50:48.873000 audit[3316]: CRED_ACQ pid=3316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:48.881458 systemd-logind[1296]: New session 11 of user core. May 13 00:50:48.882981 kernel: audit: type=1101 audit(1747097448.870:312): pid=3316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:48.883033 kernel: audit: type=1103 audit(1747097448.873:313): pid=3316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:48.883054 kernel: audit: type=1006 audit(1747097448.873:314): pid=3316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 13 00:50:48.882303 systemd[1]: Started session-11.scope. 
May 13 00:50:48.873000 audit[3316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6b1cac80 a2=3 a3=0 items=0 ppid=1 pid=3316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:48.889735 kernel: audit: type=1300 audit(1747097448.873:314): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6b1cac80 a2=3 a3=0 items=0 ppid=1 pid=3316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:48.889779 kernel: audit: type=1327 audit(1747097448.873:314): proctitle=737368643A20636F7265205B707269765D May 13 00:50:48.873000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:50:48.885000 audit[3316]: USER_START pid=3316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:48.897079 kernel: audit: type=1105 audit(1747097448.885:315): pid=3316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:48.886000 audit[3319]: CRED_ACQ pid=3319 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:48.902338 kernel: audit: type=1103 audit(1747097448.886:316): pid=3319 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.004933 sshd[3316]: pam_unix(sshd:session): session closed for user core May 13 00:50:49.004000 audit[3316]: USER_END pid=3316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.007474 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:41538.service. May 13 00:50:49.010261 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:41532.service: Deactivated successfully. May 13 00:50:49.012048 kernel: audit: type=1106 audit(1747097449.004:317): pid=3316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.011148 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:50:49.011994 systemd-logind[1296]: Session 11 logged out. Waiting for processes to exit. May 13 00:50:49.004000 audit[3316]: CRED_DISP pid=3316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.012755 systemd-logind[1296]: Removed session 11. May 13 00:50:49.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.140:22-10.0.0.1:41538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:50:49.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.140:22-10.0.0.1:41532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:49.017973 kernel: audit: type=1104 audit(1747097449.004:318): pid=3316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.036000 audit[3330]: USER_ACCT pid=3330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.038667 sshd[3330]: Accepted publickey for core from 10.0.0.1 port 41538 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:50:49.037000 audit[3330]: CRED_ACQ pid=3330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.037000 audit[3330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe5250480 a2=3 a3=0 items=0 ppid=1 pid=3330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:49.037000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:50:49.039608 sshd[3330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:50:49.042659 systemd-logind[1296]: New session 12 of user core. May 13 00:50:49.043350 systemd[1]: Started session-12.scope. 
May 13 00:50:49.046000 audit[3330]: USER_START pid=3330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.047000 audit[3335]: CRED_ACQ pid=3335 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.628234 sshd[3330]: pam_unix(sshd:session): session closed for user core May 13 00:50:49.628000 audit[3330]: USER_END pid=3330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.628000 audit[3330]: CRED_DISP pid=3330 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.140:22-10.0.0.1:41548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:49.632884 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:41548.service. May 13 00:50:49.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.140:22-10.0.0.1:41538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:49.648045 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:41538.service: Deactivated successfully. 
May 13 00:50:49.648741 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:50:49.649924 systemd-logind[1296]: Session 12 logged out. Waiting for processes to exit. May 13 00:50:49.651012 systemd-logind[1296]: Removed session 12. May 13 00:50:49.669000 audit[3344]: USER_ACCT pid=3344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.670494 sshd[3344]: Accepted publickey for core from 10.0.0.1 port 41548 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:50:49.670000 audit[3344]: CRED_ACQ pid=3344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.670000 audit[3344]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03aea8f0 a2=3 a3=0 items=0 ppid=1 pid=3344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:49.670000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:50:49.671530 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:50:49.674638 systemd-logind[1296]: New session 13 of user core. May 13 00:50:49.675325 systemd[1]: Started session-13.scope. 
May 13 00:50:49.678000 audit[3344]: USER_START pid=3344 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.679000 audit[3348]: CRED_ACQ pid=3348 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.785265 sshd[3344]: pam_unix(sshd:session): session closed for user core May 13 00:50:49.785000 audit[3344]: USER_END pid=3344 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.785000 audit[3344]: CRED_DISP pid=3344 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:49.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.140:22-10.0.0.1:41548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:49.787646 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:41548.service: Deactivated successfully. May 13 00:50:49.788774 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:50:49.789245 systemd-logind[1296]: Session 13 logged out. Waiting for processes to exit. May 13 00:50:49.790025 systemd-logind[1296]: Removed session 13. 
May 13 00:50:53.071460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198409179.mount: Deactivated successfully. May 13 00:50:54.058711 env[1313]: time="2025-05-13T00:50:54.058671663Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:54.060877 env[1313]: time="2025-05-13T00:50:54.060837051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:54.062413 env[1313]: time="2025-05-13T00:50:54.062383329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:54.064027 env[1313]: time="2025-05-13T00:50:54.064001311Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:54.064408 env[1313]: time="2025-05-13T00:50:54.064374419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 00:50:54.071531 env[1313]: time="2025-05-13T00:50:54.071501087Z" level=info msg="CreateContainer within sandbox \"fe02e71e3da4312adff0b2232a7befed1640ebb60986b043a51636ba9de73065\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:50:54.087042 env[1313]: time="2025-05-13T00:50:54.087005371Z" level=info msg="CreateContainer within sandbox \"fe02e71e3da4312adff0b2232a7befed1640ebb60986b043a51636ba9de73065\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"0ac5ebac017aab5737b2f1f4e20c5d01215e0f5ca32c6567307551803c783f34\"" May 13 00:50:54.087459 env[1313]: time="2025-05-13T00:50:54.087429281Z" level=info msg="StartContainer for \"0ac5ebac017aab5737b2f1f4e20c5d01215e0f5ca32c6567307551803c783f34\"" May 13 00:50:54.192231 env[1313]: time="2025-05-13T00:50:54.192181860Z" level=info msg="StartContainer for \"0ac5ebac017aab5737b2f1f4e20c5d01215e0f5ca32c6567307551803c783f34\" returns successfully" May 13 00:50:54.226365 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:50:54.226492 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 00:50:54.298196 kubelet[2213]: E0513 00:50:54.298167 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:54.309868 kubelet[2213]: I0513 00:50:54.309772 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dbfpr" podStartSLOduration=0.969074191 podStartE2EDuration="25.309755277s" podCreationTimestamp="2025-05-13 00:50:29 +0000 UTC" firstStartedPulling="2025-05-13 00:50:29.724367344 +0000 UTC m=+21.740505957" lastFinishedPulling="2025-05-13 00:50:54.06504843 +0000 UTC m=+46.081187043" observedRunningTime="2025-05-13 00:50:54.309296307 +0000 UTC m=+46.325434920" watchObservedRunningTime="2025-05-13 00:50:54.309755277 +0000 UTC m=+46.325893890" May 13 00:50:54.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.140:22-10.0.0.1:52690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:54.788722 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:52690.service. 
May 13 00:50:54.789754 kernel: kauditd_printk_skb: 23 callbacks suppressed May 13 00:50:54.789803 kernel: audit: type=1130 audit(1747097454.787:338): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.140:22-10.0.0.1:52690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:54.819000 audit[3430]: USER_ACCT pid=3430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:54.820556 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 52690 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:50:54.823000 audit[3430]: CRED_ACQ pid=3430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:54.824702 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:50:54.827926 kernel: audit: type=1101 audit(1747097454.819:339): pid=3430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:54.828033 kernel: audit: type=1103 audit(1747097454.823:340): pid=3430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:54.828053 kernel: audit: type=1006 audit(1747097454.823:341): pid=3430 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=14 res=1 May 13 00:50:54.828257 systemd-logind[1296]: New session 14 of user core. May 13 00:50:54.829060 systemd[1]: Started session-14.scope. May 13 00:50:54.834043 kernel: audit: type=1300 audit(1747097454.823:341): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7ea32220 a2=3 a3=0 items=0 ppid=1 pid=3430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:54.823000 audit[3430]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7ea32220 a2=3 a3=0 items=0 ppid=1 pid=3430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:54.823000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:50:54.835704 kernel: audit: type=1327 audit(1747097454.823:341): proctitle=737368643A20636F7265205B707269765D May 13 00:50:54.835767 kernel: audit: type=1105 audit(1747097454.832:342): pid=3430 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:54.832000 audit[3430]: USER_START pid=3430 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:54.839840 kernel: audit: type=1103 audit(1747097454.833:343): pid=3433 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 
00:50:54.833000 audit[3433]: CRED_ACQ pid=3433 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:55.054355 sshd[3430]: pam_unix(sshd:session): session closed for user core May 13 00:50:55.054000 audit[3430]: USER_END pid=3430 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:55.057045 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:52690.service: Deactivated successfully. May 13 00:50:55.057830 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:50:55.054000 audit[3430]: CRED_DISP pid=3430 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:55.060211 systemd-logind[1296]: Session 14 logged out. Waiting for processes to exit. May 13 00:50:55.060864 systemd-logind[1296]: Removed session 14. 
May 13 00:50:55.063316 kernel: audit: type=1106 audit(1747097455.054:344): pid=3430 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:55.063367 kernel: audit: type=1104 audit(1747097455.054:345): pid=3430 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:50:55.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.140:22-10.0.0.1:52690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:50:55.430000 audit[3494]: AVC avc: denied { write } for pid=3494 comm="tee" name="fd" dev="proc" ino=24333 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 00:50:55.430000 audit[3494]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe7fda6a1e a2=241 a3=1b6 items=1 ppid=3461 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.430000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 13 00:50:55.430000 audit: PATH item=0 name="/dev/fd/63" inode=24330 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:50:55.430000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 00:50:55.440000 
audit[3516]: AVC avc: denied { write } for pid=3516 comm="tee" name="fd" dev="proc" ino=25161 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 00:50:55.440000 audit[3516]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc3a201a30 a2=241 a3=1b6 items=1 ppid=3453 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.440000 audit: CWD cwd="/etc/service/enabled/cni/log" May 13 00:50:55.440000 audit: PATH item=0 name="/dev/fd/63" inode=23218 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:50:55.440000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 00:50:55.445000 audit[3524]: AVC avc: denied { write } for pid=3524 comm="tee" name="fd" dev="proc" ino=25165 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 00:50:55.445000 audit[3524]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd56adca2e a2=241 a3=1b6 items=1 ppid=3455 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.445000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 13 00:50:55.445000 audit: PATH item=0 name="/dev/fd/63" inode=23223 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:50:55.445000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 00:50:55.446000 audit[3510]: AVC avc: denied { write } for pid=3510 comm="tee" name="fd" dev="proc" ino=23226 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 00:50:55.446000 audit[3510]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea11b3a2f a2=241 a3=1b6 items=1 ppid=3457 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.446000 audit: CWD cwd="/etc/service/enabled/bird/log" May 13 00:50:55.446000 audit: PATH item=0 name="/dev/fd/63" inode=25156 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:50:55.446000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 00:50:55.466000 audit[3529]: AVC avc: denied { write } for pid=3529 comm="tee" name="fd" dev="proc" ino=25171 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 00:50:55.466000 audit[3529]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc6b89ca2e a2=241 a3=1b6 items=1 ppid=3452 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.466000 audit: CWD cwd="/etc/service/enabled/confd/log" May 13 00:50:55.466000 audit: PATH item=0 name="/dev/fd/63" inode=24342 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:50:55.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 00:50:55.468000 audit[3509]: AVC avc: denied { write } for pid=3509 comm="tee" name="fd" dev="proc" ino=25969 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 00:50:55.468000 audit[3509]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff60f62a1f a2=241 a3=1b6 items=1 ppid=3454 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.468000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 13 00:50:55.468000 audit: PATH item=0 name="/dev/fd/63" inode=24337 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:50:55.468000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 00:50:55.491000 audit[3533]: AVC avc: denied { write } for pid=3533 comm="tee" name="fd" dev="proc" ino=25177 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 00:50:55.491000 audit[3533]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd777cda2e a2=241 a3=1b6 items=1 ppid=3468 pid=3533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.491000 audit: CWD cwd="/etc/service/enabled/felix/log" May 13 00:50:55.491000 audit: PATH item=0 name="/dev/fd/63" inode=24345 
dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:50:55.491000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit: BPF prog-id=10 op=LOAD May 13 00:50:55.546000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe124d4420 a2=98 a3=3 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.546000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.546000 audit: BPF prog-id=10 op=UNLOAD May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit: BPF prog-id=11 op=LOAD May 13 00:50:55.546000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe124d4200 a2=74 a3=540051 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.546000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.546000 audit: BPF prog-id=11 op=UNLOAD May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for 
pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.546000 audit: BPF prog-id=12 op=LOAD May 13 00:50:55.546000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe124d4230 a2=94 a3=2 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.546000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.546000 audit: BPF prog-id=12 op=UNLOAD May 13 00:50:55.647000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 
audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit: BPF prog-id=13 op=LOAD May 13 00:50:55.647000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe124d40f0 a2=40 a3=1 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.647000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.647000 audit: BPF prog-id=13 op=UNLOAD May 13 00:50:55.647000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.647000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe124d41c0 a2=50 a3=7ffe124d42a0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.647000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe124d4100 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe124d4130 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe124d4040 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe124d4150 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe124d4130 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe124d4120 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe124d4150 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe124d4130 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe124d4150 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe124d4120 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe124d4190 a2=28 a3=0 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 
00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe124d3f40 a2=50 a3=1 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for 
pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit: BPF prog-id=14 op=LOAD May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe124d3f40 a2=94 a3=5 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit: BPF prog-id=14 op=UNLOAD May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe124d3ff0 a2=50 a3=1 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe124d4110 a2=4 a3=38 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.654000 audit[3570]: AVC avc: denied { confidentiality } for pid=3570 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 00:50:55.654000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe124d4160 a2=94 a3=6 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.655000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { confidentiality } for pid=3570 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 00:50:55.655000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe124d3910 a2=94 a3=83 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 
00:50:55.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.655000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { perfmon } for pid=3570 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { bpf } for pid=3570 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.655000 audit[3570]: AVC avc: denied { confidentiality } for pid=3570 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 00:50:55.655000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe124d3910 a2=94 a3=83 items=0 ppid=3469 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit: BPF prog-id=15 op=LOAD May 13 00:50:55.661000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe6c399d10 a2=98 a3=1999999999999999 items=0 ppid=3469 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.661000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 13 00:50:55.661000 audit: BPF prog-id=15 op=UNLOAD May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit: BPF prog-id=16 op=LOAD May 13 00:50:55.661000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe6c399bf0 a2=74 a3=ffff items=0 ppid=3469 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.661000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 13 00:50:55.661000 audit: BPF prog-id=16 op=UNLOAD May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.661000 audit: BPF prog-id=17 op=LOAD May 13 00:50:55.661000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe6c399c30 a2=40 a3=7ffe6c399e10 items=0 ppid=3469 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.661000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 13 00:50:55.661000 audit: BPF prog-id=17 op=UNLOAD May 13 00:50:55.696882 systemd-networkd[1088]: vxlan.calico: Link UP May 13 00:50:55.696888 systemd-networkd[1088]: vxlan.calico: Gained carrier May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit: BPF prog-id=18 op=LOAD May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4f5b1c50 a2=98 a3=ffffffff items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit: BPF prog-id=18 op=UNLOAD May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit: BPF prog-id=19 op=LOAD May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4f5b1a60 a2=74 a3=540051 items=0 ppid=3469 pid=3600 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit: BPF prog-id=19 op=UNLOAD May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit: BPF prog-id=20 op=LOAD May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4f5b1a90 a2=94 a3=2 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit: BPF prog-id=20 op=UNLOAD May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd4f5b1960 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4f5b1990 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4f5b18a0 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd4f5b19b0 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd4f5b1990 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd4f5b1980 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied 
{ bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd4f5b19b0 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4f5b1990 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4f5b19b0 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4f5b1980 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd4f5b19f0 a2=28 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit[3600]: 
AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.705000 audit: BPF prog-id=21 op=LOAD May 13 00:50:55.705000 audit[3600]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd4f5b1860 a2=40 a3=0 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.705000 audit: BPF prog-id=21 op=UNLOAD May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd4f5b1850 a2=50 a3=2800 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.706000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd4f5b1850 a2=50 a3=2800 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.706000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for 
pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit: BPF prog-id=22 op=LOAD May 13 00:50:55.706000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd4f5b1070 a2=94 a3=2 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.706000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.706000 audit: BPF prog-id=22 op=UNLOAD May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { perfmon } for pid=3600 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit[3600]: AVC avc: denied { bpf } for pid=3600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.706000 audit: BPF prog-id=23 op=LOAD May 13 00:50:55.706000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd4f5b1170 a2=94 a3=30 items=0 ppid=3469 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.706000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit: BPF prog-id=24 op=LOAD May 13 00:50:55.709000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffbc318970 a2=98 a3=0 items=0 ppid=3469 pid=3604 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.709000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.709000 audit: BPF prog-id=24 op=UNLOAD May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 
audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit: BPF prog-id=25 op=LOAD May 13 00:50:55.709000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffbc318750 a2=74 a3=540051 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.709000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.709000 audit: BPF prog-id=25 op=UNLOAD May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for 
pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.709000 audit: BPF prog-id=26 op=LOAD May 13 00:50:55.709000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffbc318780 a2=94 a3=2 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.709000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.709000 audit: BPF prog-id=26 op=UNLOAD May 13 00:50:55.810000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit: BPF prog-id=27 op=LOAD May 13 00:50:55.810000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffbc318640 a2=40 a3=1 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 13 00:50:55.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.810000 audit: BPF prog-id=27 op=UNLOAD May 13 00:50:55.810000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.810000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffbc318710 a2=50 a3=7fffbc3187f0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbc318650 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbc318680 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbc318590 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbc3186a0 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbc318680 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbc318670 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbc3186a0 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbc318680 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbc3186a0 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbc318670 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbc3186e0 a2=28 a3=0 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffbc318490 a2=50 a3=1 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit: BPF prog-id=28 op=LOAD May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffbc318490 a2=94 a3=5 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit: BPF prog-id=28 op=UNLOAD May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffbc318540 a2=50 a3=1 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffbc318660 a2=4 a3=38 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: 
denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffbc3186b0 a2=94 a3=6 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffbc317e60 a2=94 a3=83 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { perfmon } for pid=3604 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.818000 audit[3604]: AVC avc: denied { confidentiality } for pid=3604 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 00:50:55.818000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffbc317e60 a2=94 a3=83 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.819000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.819000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbc3198a0 a2=10 a3=208 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.819000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.819000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.819000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbc319740 a2=10 a3=3 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.819000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.819000 audit[3604]: AVC avc: denied { bpf } for pid=3604 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.819000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbc3196e0 a2=10 a3=3 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.819000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.819000 audit[3604]: AVC avc: denied { bpf } for pid=3604 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 00:50:55.819000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbc3196e0 a2=10 a3=7 items=0 ppid=3469 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.819000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 00:50:55.828000 audit: BPF prog-id=23 op=UNLOAD May 13 00:50:55.861000 audit[3634]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3634 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:55.861000 audit[3634]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcf9ac3880 a2=0 a3=7ffcf9ac386c items=0 ppid=3469 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.861000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:55.865000 audit[3633]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3633 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:55.865000 audit[3633]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff17390830 a2=0 a3=7fff1739081c items=0 ppid=3469 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.865000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:55.867000 audit[3636]: NETFILTER_CFG table=filter:99 family=2 entries=39 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:55.867000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7fffa0579520 a2=0 a3=7fffa057950c items=0 ppid=3469 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.867000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:55.870000 audit[3632]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=3632 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:55.870000 audit[3632]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc962392d0 a2=0 a3=7ffc962392bc items=0 ppid=3469 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:55.870000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:56.056394 env[1313]: time="2025-05-13T00:50:56.056281859Z" level=info msg="StopPodSandbox for \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\"" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.126 [INFO][3659] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.127 [INFO][3659] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" iface="eth0" netns="/var/run/netns/cni-8e4ed88b-9309-193a-d4b3-cae7bd698efd" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.127 [INFO][3659] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" iface="eth0" netns="/var/run/netns/cni-8e4ed88b-9309-193a-d4b3-cae7bd698efd" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.127 [INFO][3659] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" iface="eth0" netns="/var/run/netns/cni-8e4ed88b-9309-193a-d4b3-cae7bd698efd" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.127 [INFO][3659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.127 [INFO][3659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.169 [INFO][3667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.169 [INFO][3667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.170 [INFO][3667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.239 [WARNING][3667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.239 [INFO][3667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.259 [INFO][3667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:50:56.263216 env[1313]: 2025-05-13 00:50:56.261 [INFO][3659] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:50:56.263717 env[1313]: time="2025-05-13T00:50:56.263348628Z" level=info msg="TearDown network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\" successfully" May 13 00:50:56.263717 env[1313]: time="2025-05-13T00:50:56.263380842Z" level=info msg="StopPodSandbox for \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\" returns successfully" May 13 00:50:56.263768 kubelet[2213]: E0513 00:50:56.263714 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:56.264376 env[1313]: time="2025-05-13T00:50:56.264323748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-95lft,Uid:ad75be90-a580-4409-ab7f-57d0bc34975e,Namespace:kube-system,Attempt:1,}" May 13 00:50:56.265635 systemd[1]: run-netns-cni\x2d8e4ed88b\x2d9309\x2d193a\x2dd4b3\x2dcae7bd698efd.mount: Deactivated successfully. 
May 13 00:50:56.536175 systemd-networkd[1088]: cali97c2a3a3064: Link UP May 13 00:50:56.537751 systemd-networkd[1088]: cali97c2a3a3064: Gained carrier May 13 00:50:56.538086 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali97c2a3a3064: link becomes ready May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.436 [INFO][3676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--95lft-eth0 coredns-7db6d8ff4d- kube-system ad75be90-a580-4409-ab7f-57d0bc34975e 912 0 2025-05-13 00:50:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-95lft eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali97c2a3a3064 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Namespace="kube-system" Pod="coredns-7db6d8ff4d-95lft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--95lft-" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.436 [INFO][3676] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Namespace="kube-system" Pod="coredns-7db6d8ff4d-95lft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.462 [INFO][3693] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" HandleID="k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.509 [INFO][3693] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" HandleID="k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-95lft", "timestamp":"2025-05-13 00:50:56.462802628 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.509 [INFO][3693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.509 [INFO][3693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.509 [INFO][3693] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.512 [INFO][3693] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.516 [INFO][3693] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.519 [INFO][3693] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.520 [INFO][3693] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.521 [INFO][3693] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 
00:50:56.521 [INFO][3693] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.523 [INFO][3693] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961 May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.527 [INFO][3693] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.531 [INFO][3693] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.531 [INFO][3693] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" host="localhost" May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.531 [INFO][3693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:50:56.555741 env[1313]: 2025-05-13 00:50:56.531 [INFO][3693] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" HandleID="k8s-pod-network.e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.556345 env[1313]: 2025-05-13 00:50:56.533 [INFO][3676] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Namespace="kube-system" Pod="coredns-7db6d8ff4d-95lft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--95lft-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ad75be90-a580-4409-ab7f-57d0bc34975e", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-95lft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97c2a3a3064", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:56.556345 env[1313]: 2025-05-13 00:50:56.534 [INFO][3676] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Namespace="kube-system" Pod="coredns-7db6d8ff4d-95lft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.556345 env[1313]: 2025-05-13 00:50:56.534 [INFO][3676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97c2a3a3064 ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Namespace="kube-system" Pod="coredns-7db6d8ff4d-95lft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.556345 env[1313]: 2025-05-13 00:50:56.538 [INFO][3676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Namespace="kube-system" Pod="coredns-7db6d8ff4d-95lft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.556345 env[1313]: 2025-05-13 00:50:56.543 [INFO][3676] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Namespace="kube-system" Pod="coredns-7db6d8ff4d-95lft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--95lft-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ad75be90-a580-4409-ab7f-57d0bc34975e", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961", Pod:"coredns-7db6d8ff4d-95lft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97c2a3a3064", MAC:"1a:d6:49:e1:44:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:56.556345 env[1313]: 2025-05-13 00:50:56.551 [INFO][3676] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961" Namespace="kube-system" Pod="coredns-7db6d8ff4d-95lft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:50:56.559000 audit[3713]: NETFILTER_CFG 
table=filter:101 family=2 entries=34 op=nft_register_chain pid=3713 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:56.559000 audit[3713]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffdf75f4a00 a2=0 a3=7ffdf75f49ec items=0 ppid=3469 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:56.559000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:56.575631 env[1313]: time="2025-05-13T00:50:56.575559397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:56.575631 env[1313]: time="2025-05-13T00:50:56.575596290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:56.575631 env[1313]: time="2025-05-13T00:50:56.575605909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:56.575836 env[1313]: time="2025-05-13T00:50:56.575754326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961 pid=3727 runtime=io.containerd.runc.v2 May 13 00:50:56.596984 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:50:56.619641 env[1313]: time="2025-05-13T00:50:56.619592391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-95lft,Uid:ad75be90-a580-4409-ab7f-57d0bc34975e,Namespace:kube-system,Attempt:1,} returns sandbox id \"e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961\"" May 13 00:50:56.620178 kubelet[2213]: E0513 00:50:56.620161 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:56.621723 env[1313]: time="2025-05-13T00:50:56.621691487Z" level=info msg="CreateContainer within sandbox \"e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:50:56.646304 env[1313]: time="2025-05-13T00:50:56.646262596Z" level=info msg="CreateContainer within sandbox \"e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bff2cbe952d74710445218fb64e86c2503c44c9aa44c531648be3e71f8f550bf\"" May 13 00:50:56.647232 env[1313]: time="2025-05-13T00:50:56.647198808Z" level=info msg="StartContainer for \"bff2cbe952d74710445218fb64e86c2503c44c9aa44c531648be3e71f8f550bf\"" May 13 00:50:56.692367 env[1313]: time="2025-05-13T00:50:56.692313675Z" level=info msg="StartContainer for \"bff2cbe952d74710445218fb64e86c2503c44c9aa44c531648be3e71f8f550bf\" returns successfully" 
May 13 00:50:56.742674 systemd-networkd[1088]: vxlan.calico: Gained IPv6LL May 13 00:50:57.056408 env[1313]: time="2025-05-13T00:50:57.056356164Z" level=info msg="StopPodSandbox for \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\"" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.092 [INFO][3815] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.092 [INFO][3815] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" iface="eth0" netns="/var/run/netns/cni-a42b14d7-46f9-879b-4f8a-6962197f59dd" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.092 [INFO][3815] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" iface="eth0" netns="/var/run/netns/cni-a42b14d7-46f9-879b-4f8a-6962197f59dd" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.092 [INFO][3815] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" iface="eth0" netns="/var/run/netns/cni-a42b14d7-46f9-879b-4f8a-6962197f59dd" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.092 [INFO][3815] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.092 [INFO][3815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.111 [INFO][3823] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.111 [INFO][3823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.111 [INFO][3823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.116 [WARNING][3823] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.116 [INFO][3823] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.117 [INFO][3823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:50:57.120334 env[1313]: 2025-05-13 00:50:57.119 [INFO][3815] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:50:57.120763 env[1313]: time="2025-05-13T00:50:57.120464940Z" level=info msg="TearDown network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\" successfully" May 13 00:50:57.120763 env[1313]: time="2025-05-13T00:50:57.120493787Z" level=info msg="StopPodSandbox for \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\" returns successfully" May 13 00:50:57.121149 env[1313]: time="2025-05-13T00:50:57.121112433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857fcd798-pcg4l,Uid:575e7f4c-4b2b-4b60-8634-168da3235e29,Namespace:calico-apiserver,Attempt:1,}" May 13 00:50:57.210063 systemd-networkd[1088]: cali1ab45acee7e: Link UP May 13 00:50:57.210515 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:50:57.210551 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1ab45acee7e: link becomes ready May 13 00:50:57.210734 systemd-networkd[1088]: cali1ab45acee7e: Gained carrier May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.158 [INFO][3832] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0 calico-apiserver-857fcd798- calico-apiserver 575e7f4c-4b2b-4b60-8634-168da3235e29 938 0 2025-05-13 00:50:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:857fcd798 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-857fcd798-pcg4l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1ab45acee7e [] []}} ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-pcg4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--pcg4l-" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.158 [INFO][3832] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-pcg4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.181 [INFO][3846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" HandleID="k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.188 [INFO][3846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" HandleID="k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-857fcd798-pcg4l", "timestamp":"2025-05-13 00:50:57.181862983 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.188 [INFO][3846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.188 [INFO][3846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.188 [INFO][3846] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.189 [INFO][3846] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.191 [INFO][3846] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.194 [INFO][3846] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.196 [INFO][3846] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.197 [INFO][3846] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.197 [INFO][3846] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" 
host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.198 [INFO][3846] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.201 [INFO][3846] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.204 [INFO][3846] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.204 [INFO][3846] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" host="localhost" May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.204 [INFO][3846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:50:57.220044 env[1313]: 2025-05-13 00:50:57.205 [INFO][3846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" HandleID="k8s-pod-network.78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.220633 env[1313]: 2025-05-13 00:50:57.207 [INFO][3832] cni-plugin/k8s.go 386: Populated endpoint ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-pcg4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0", GenerateName:"calico-apiserver-857fcd798-", Namespace:"calico-apiserver", SelfLink:"", UID:"575e7f4c-4b2b-4b60-8634-168da3235e29", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857fcd798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-857fcd798-pcg4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ab45acee7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:57.220633 env[1313]: 2025-05-13 00:50:57.207 [INFO][3832] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-pcg4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.220633 env[1313]: 2025-05-13 00:50:57.207 [INFO][3832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ab45acee7e ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-pcg4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.220633 env[1313]: 2025-05-13 00:50:57.211 [INFO][3832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-pcg4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.220633 env[1313]: 2025-05-13 00:50:57.211 [INFO][3832] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-pcg4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0", GenerateName:"calico-apiserver-857fcd798-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"575e7f4c-4b2b-4b60-8634-168da3235e29", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857fcd798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b", Pod:"calico-apiserver-857fcd798-pcg4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ab45acee7e", MAC:"42:57:60:57:39:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:57.220633 env[1313]: 2025-05-13 00:50:57.218 [INFO][3832] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-pcg4l" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:50:57.226630 kubelet[2213]: I0513 00:50:57.226602 2213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:50:57.227389 kubelet[2213]: E0513 00:50:57.227350 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:57.226000 
audit[3868]: NETFILTER_CFG table=filter:102 family=2 entries=44 op=nft_register_chain pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:57.226000 audit[3868]: SYSCALL arch=c000003e syscall=46 success=yes exit=24680 a0=3 a1=7ffd1cc0b6e0 a2=0 a3=7ffd1cc0b6cc items=0 ppid=3469 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:57.226000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:57.234581 env[1313]: time="2025-05-13T00:50:57.234519969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:57.234581 env[1313]: time="2025-05-13T00:50:57.234556914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:57.234997 env[1313]: time="2025-05-13T00:50:57.234567214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:57.235248 env[1313]: time="2025-05-13T00:50:57.235203104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b pid=3876 runtime=io.containerd.runc.v2 May 13 00:50:57.256309 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:50:57.267833 systemd[1]: run-containerd-runc-k8s.io-e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961-runc.jcLK0j.mount: Deactivated successfully. 
May 13 00:50:57.267969 systemd[1]: run-netns-cni\x2da42b14d7\x2d46f9\x2d879b\x2d4f8a\x2d6962197f59dd.mount: Deactivated successfully. May 13 00:50:57.278436 env[1313]: time="2025-05-13T00:50:57.278399077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857fcd798-pcg4l,Uid:575e7f4c-4b2b-4b60-8634-168da3235e29,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b\"" May 13 00:50:57.280141 env[1313]: time="2025-05-13T00:50:57.280091937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:50:57.302501 systemd[1]: run-containerd-runc-k8s.io-0ac5ebac017aab5737b2f1f4e20c5d01215e0f5ca32c6567307551803c783f34-runc.1H3Zez.mount: Deactivated successfully. May 13 00:50:57.306034 kubelet[2213]: E0513 00:50:57.306014 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:57.329294 kubelet[2213]: I0513 00:50:57.329178 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-95lft" podStartSLOduration=34.32916139 podStartE2EDuration="34.32916139s" podCreationTimestamp="2025-05-13 00:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:50:57.3206535 +0000 UTC m=+49.336792113" watchObservedRunningTime="2025-05-13 00:50:57.32916139 +0000 UTC m=+49.345300003" May 13 00:50:57.335000 audit[3953]: NETFILTER_CFG table=filter:103 family=2 entries=16 op=nft_register_rule pid=3953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:57.335000 audit[3953]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdb018fbe0 a2=0 a3=7ffdb018fbcc items=0 ppid=2377 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:57.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:57.340000 audit[3953]: NETFILTER_CFG table=nat:104 family=2 entries=14 op=nft_register_rule pid=3953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:57.340000 audit[3953]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdb018fbe0 a2=0 a3=0 items=0 ppid=2377 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:57.340000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:57.355000 audit[3956]: NETFILTER_CFG table=filter:105 family=2 entries=13 op=nft_register_rule pid=3956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:57.355000 audit[3956]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffce99ba6f0 a2=0 a3=7ffce99ba6dc items=0 ppid=2377 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:57.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:57.359000 audit[3956]: NETFILTER_CFG table=nat:106 family=2 entries=35 op=nft_register_chain pid=3956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:57.359000 audit[3956]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffce99ba6f0 a2=0 a3=7ffce99ba6dc items=0 ppid=2377 pid=3956 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:57.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:58.055979 env[1313]: time="2025-05-13T00:50:58.055916877Z" level=info msg="StopPodSandbox for \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\"" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.091 [INFO][3977] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.091 [INFO][3977] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" iface="eth0" netns="/var/run/netns/cni-bd0d8ba0-185e-9334-dc2a-927ecfc43462" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.092 [INFO][3977] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" iface="eth0" netns="/var/run/netns/cni-bd0d8ba0-185e-9334-dc2a-927ecfc43462" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.092 [INFO][3977] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" iface="eth0" netns="/var/run/netns/cni-bd0d8ba0-185e-9334-dc2a-927ecfc43462" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.092 [INFO][3977] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.092 [INFO][3977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.111 [INFO][3985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.111 [INFO][3985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.111 [INFO][3985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.117 [WARNING][3985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.117 [INFO][3985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.118 [INFO][3985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:50:58.121078 env[1313]: 2025-05-13 00:50:58.119 [INFO][3977] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:50:58.124100 env[1313]: time="2025-05-13T00:50:58.121204332Z" level=info msg="TearDown network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\" successfully" May 13 00:50:58.124100 env[1313]: time="2025-05-13T00:50:58.121235644Z" level=info msg="StopPodSandbox for \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\" returns successfully" May 13 00:50:58.124100 env[1313]: time="2025-05-13T00:50:58.122212082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8w5v4,Uid:d483d5d7-194a-4438-b970-a2e8097bf20a,Namespace:kube-system,Attempt:1,}" May 13 00:50:58.123568 systemd[1]: run-netns-cni\x2dbd0d8ba0\x2d185e\x2d9334\x2ddc2a\x2d927ecfc43462.mount: Deactivated successfully. 
May 13 00:50:58.124236 kubelet[2213]: E0513 00:50:58.121527 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:58.216620 systemd-networkd[1088]: cali63459748fc9: Link UP May 13 00:50:58.218819 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:50:58.218874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali63459748fc9: link becomes ready May 13 00:50:58.219036 systemd-networkd[1088]: cali63459748fc9: Gained carrier May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.162 [INFO][3992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0 coredns-7db6d8ff4d- kube-system d483d5d7-194a-4438-b970-a2e8097bf20a 961 0 2025-05-13 00:50:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8w5v4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63459748fc9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8w5v4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8w5v4-" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.162 [INFO][3992] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8w5v4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.185 [INFO][4007] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" HandleID="k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.191 [INFO][4007] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" HandleID="k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d2d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8w5v4", "timestamp":"2025-05-13 00:50:58.185661149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.191 [INFO][4007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.191 [INFO][4007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.191 [INFO][4007] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.192 [INFO][4007] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" host="localhost" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.197 [INFO][4007] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.200 [INFO][4007] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.201 [INFO][4007] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.203 [INFO][4007] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.203 [INFO][4007] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" host="localhost" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.204 [INFO][4007] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685 May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.206 [INFO][4007] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" host="localhost" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.212 [INFO][4007] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" host="localhost" May 13 
00:50:58.229153 env[1313]: 2025-05-13 00:50:58.212 [INFO][4007] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" host="localhost" May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.212 [INFO][4007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:50:58.229153 env[1313]: 2025-05-13 00:50:58.212 [INFO][4007] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" HandleID="k8s-pod-network.e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.229732 env[1313]: 2025-05-13 00:50:58.214 [INFO][3992] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8w5v4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d483d5d7-194a-4438-b970-a2e8097bf20a", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-8w5v4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63459748fc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:58.229732 env[1313]: 2025-05-13 00:50:58.214 [INFO][3992] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8w5v4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.229732 env[1313]: 2025-05-13 00:50:58.215 [INFO][3992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63459748fc9 ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8w5v4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.229732 env[1313]: 2025-05-13 00:50:58.218 [INFO][3992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8w5v4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.229732 env[1313]: 2025-05-13 00:50:58.219 [INFO][3992] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8w5v4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d483d5d7-194a-4438-b970-a2e8097bf20a", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685", Pod:"coredns-7db6d8ff4d-8w5v4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63459748fc9", MAC:"ce:63:cd:30:f9:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:58.229732 env[1313]: 2025-05-13 00:50:58.226 [INFO][3992] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8w5v4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:50:58.236000 audit[4029]: NETFILTER_CFG table=filter:107 family=2 entries=34 op=nft_register_chain pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:58.236000 audit[4029]: SYSCALL arch=c000003e syscall=46 success=yes exit=18220 a0=3 a1=7ffd4107db40 a2=0 a3=7ffd4107db2c items=0 ppid=3469 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:58.236000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:58.242276 env[1313]: time="2025-05-13T00:50:58.242204869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:58.242276 env[1313]: time="2025-05-13T00:50:58.242244409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:58.242276 env[1313]: time="2025-05-13T00:50:58.242254579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:58.242614 env[1313]: time="2025-05-13T00:50:58.242562824Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685 pid=4037 runtime=io.containerd.runc.v2 May 13 00:50:58.262592 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:50:58.283095 env[1313]: time="2025-05-13T00:50:58.282365885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8w5v4,Uid:d483d5d7-194a-4438-b970-a2e8097bf20a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685\"" May 13 00:50:58.283198 kubelet[2213]: E0513 00:50:58.283009 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:58.285174 env[1313]: time="2025-05-13T00:50:58.285142715Z" level=info msg="CreateContainer within sandbox \"e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:50:58.302407 env[1313]: time="2025-05-13T00:50:58.302282200Z" level=info msg="CreateContainer within sandbox \"e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23a6459119783a7523edc4aed0575c15555bfe6140f60903bb3f9098d48ce2a7\"" May 13 00:50:58.302815 env[1313]: time="2025-05-13T00:50:58.302783480Z" level=info msg="StartContainer for \"23a6459119783a7523edc4aed0575c15555bfe6140f60903bb3f9098d48ce2a7\"" May 13 00:50:58.314333 kubelet[2213]: E0513 00:50:58.312908 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:58.343031 env[1313]: time="2025-05-13T00:50:58.342984716Z" level=info msg="StartContainer for \"23a6459119783a7523edc4aed0575c15555bfe6140f60903bb3f9098d48ce2a7\" returns successfully" May 13 00:50:58.405393 systemd-networkd[1088]: cali1ab45acee7e: Gained IPv6LL May 13 00:50:58.597068 systemd-networkd[1088]: cali97c2a3a3064: Gained IPv6LL May 13 00:50:59.056578 env[1313]: time="2025-05-13T00:50:59.056547474Z" level=info msg="StopPodSandbox for \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\"" May 13 00:50:59.056911 env[1313]: time="2025-05-13T00:50:59.056598987Z" level=info msg="StopPodSandbox for \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\"" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.102 [INFO][4143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.102 [INFO][4143] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" iface="eth0" netns="/var/run/netns/cni-fcd3bb65-bc70-6c7d-80c9-2a84603ae0dd" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.102 [INFO][4143] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" iface="eth0" netns="/var/run/netns/cni-fcd3bb65-bc70-6c7d-80c9-2a84603ae0dd" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.103 [INFO][4143] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" iface="eth0" netns="/var/run/netns/cni-fcd3bb65-bc70-6c7d-80c9-2a84603ae0dd" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.103 [INFO][4143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.103 [INFO][4143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.127 [INFO][4158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.127 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.127 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.131 [WARNING][4158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.131 [INFO][4158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.132 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:50:59.135687 env[1313]: 2025-05-13 00:50:59.134 [INFO][4143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:50:59.137900 env[1313]: time="2025-05-13T00:50:59.135825309Z" level=info msg="TearDown network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\" successfully" May 13 00:50:59.137900 env[1313]: time="2025-05-13T00:50:59.135856130Z" level=info msg="StopPodSandbox for \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\" returns successfully" May 13 00:50:59.137900 env[1313]: time="2025-05-13T00:50:59.137080832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f548f5c9b-2tczf,Uid:717e1a73-0b5d-4ee5-9bae-65be581845ed,Namespace:calico-system,Attempt:1,}" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.137 [INFO][4142] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.137 [INFO][4142] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" iface="eth0" netns="/var/run/netns/cni-9fba4bf4-12bd-abfb-c018-2b5878b3cc30" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.137 [INFO][4142] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" iface="eth0" netns="/var/run/netns/cni-9fba4bf4-12bd-abfb-c018-2b5878b3cc30" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.138 [INFO][4142] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" iface="eth0" netns="/var/run/netns/cni-9fba4bf4-12bd-abfb-c018-2b5878b3cc30" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.138 [INFO][4142] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.138 [INFO][4142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.172 [INFO][4167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.173 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.173 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.177 [WARNING][4167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.177 [INFO][4167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.178 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:50:59.180487 env[1313]: 2025-05-13 00:50:59.179 [INFO][4142] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:50:59.180889 env[1313]: time="2025-05-13T00:50:59.180611621Z" level=info msg="TearDown network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\" successfully" May 13 00:50:59.180889 env[1313]: time="2025-05-13T00:50:59.180642863Z" level=info msg="StopPodSandbox for \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\" returns successfully" May 13 00:50:59.181306 env[1313]: time="2025-05-13T00:50:59.181270364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857fcd798-fpxlt,Uid:16ab6220-b9fb-42eb-b90d-d41f68bb7889,Namespace:calico-apiserver,Attempt:1,}" May 13 00:50:59.256178 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:50:59.256277 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5f487c2cc06: link becomes ready May 13 00:50:59.257071 systemd-networkd[1088]: cali5f487c2cc06: Link UP May 13 00:50:59.257272 systemd-networkd[1088]: cali5f487c2cc06: Gained carrier May 13 00:50:59.267852 systemd[1]: 
run-containerd-runc-k8s.io-23a6459119783a7523edc4aed0575c15555bfe6140f60903bb3f9098d48ce2a7-runc.DBeq6l.mount: Deactivated successfully. May 13 00:50:59.267990 systemd[1]: run-netns-cni\x2d9fba4bf4\x2d12bd\x2dabfb\x2dc018\x2d2b5878b3cc30.mount: Deactivated successfully. May 13 00:50:59.268065 systemd[1]: run-netns-cni\x2dfcd3bb65\x2dbc70\x2d6c7d\x2d80c9\x2d2a84603ae0dd.mount: Deactivated successfully. May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.184 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0 calico-kube-controllers-f548f5c9b- calico-system 717e1a73-0b5d-4ee5-9bae-65be581845ed 975 0 2025-05-13 00:50:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f548f5c9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f548f5c9b-2tczf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5f487c2cc06 [] []}} ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Namespace="calico-system" Pod="calico-kube-controllers-f548f5c9b-2tczf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.184 [INFO][4174] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Namespace="calico-system" Pod="calico-kube-controllers-f548f5c9b-2tczf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.216 [INFO][4189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" HandleID="k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.225 [INFO][4189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" HandleID="k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000127a00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f548f5c9b-2tczf", "timestamp":"2025-05-13 00:50:59.216467673 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.225 [INFO][4189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.225 [INFO][4189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.225 [INFO][4189] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.226 [INFO][4189] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" host="localhost" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.229 [INFO][4189] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.234 [INFO][4189] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.236 [INFO][4189] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.237 [INFO][4189] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.237 [INFO][4189] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" host="localhost" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.238 [INFO][4189] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76 May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.242 [INFO][4189] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" host="localhost" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.246 [INFO][4189] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" host="localhost" May 13 
00:50:59.274547 env[1313]: 2025-05-13 00:50:59.246 [INFO][4189] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" host="localhost" May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.246 [INFO][4189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:50:59.274547 env[1313]: 2025-05-13 00:50:59.246 [INFO][4189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" HandleID="k8s-pod-network.14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.275770 env[1313]: 2025-05-13 00:50:59.250 [INFO][4174] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Namespace="calico-system" Pod="calico-kube-controllers-f548f5c9b-2tczf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0", GenerateName:"calico-kube-controllers-f548f5c9b-", Namespace:"calico-system", SelfLink:"", UID:"717e1a73-0b5d-4ee5-9bae-65be581845ed", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f548f5c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f548f5c9b-2tczf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f487c2cc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:59.275770 env[1313]: 2025-05-13 00:50:59.251 [INFO][4174] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Namespace="calico-system" Pod="calico-kube-controllers-f548f5c9b-2tczf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.275770 env[1313]: 2025-05-13 00:50:59.251 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f487c2cc06 ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Namespace="calico-system" Pod="calico-kube-controllers-f548f5c9b-2tczf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.275770 env[1313]: 2025-05-13 00:50:59.256 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Namespace="calico-system" Pod="calico-kube-controllers-f548f5c9b-2tczf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.275770 env[1313]: 2025-05-13 00:50:59.256 [INFO][4174] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" 
Namespace="calico-system" Pod="calico-kube-controllers-f548f5c9b-2tczf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0", GenerateName:"calico-kube-controllers-f548f5c9b-", Namespace:"calico-system", SelfLink:"", UID:"717e1a73-0b5d-4ee5-9bae-65be581845ed", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f548f5c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76", Pod:"calico-kube-controllers-f548f5c9b-2tczf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f487c2cc06", MAC:"42:a6:ed:8e:54:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:59.275770 env[1313]: 2025-05-13 00:50:59.272 [INFO][4174] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76" Namespace="calico-system" Pod="calico-kube-controllers-f548f5c9b-2tczf" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:50:59.289618 systemd-networkd[1088]: calie554cc45d17: Link UP May 13 00:50:59.290003 env[1313]: time="2025-05-13T00:50:59.289864095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:59.290071 env[1313]: time="2025-05-13T00:50:59.290018212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:59.290101 env[1313]: time="2025-05-13T00:50:59.290073712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:59.292364 env[1313]: time="2025-05-13T00:50:59.290356787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76 pid=4242 runtime=io.containerd.runc.v2 May 13 00:50:59.296000 audit[4254]: NETFILTER_CFG table=filter:108 family=2 entries=46 op=nft_register_chain pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:59.296000 audit[4254]: SYSCALL arch=c000003e syscall=46 success=yes exit=22712 a0=3 a1=7ffc14937b70 a2=0 a3=7ffc14937b5c items=0 ppid=3469 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:59.296000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:59.299721 systemd-networkd[1088]: calie554cc45d17: Gained carrier May 13 00:50:59.299964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie554cc45d17: link becomes ready May 13 00:50:59.312008 env[1313]: 
2025-05-13 00:50:59.218 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0 calico-apiserver-857fcd798- calico-apiserver 16ab6220-b9fb-42eb-b90d-d41f68bb7889 977 0 2025-05-13 00:50:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:857fcd798 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-857fcd798-fpxlt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie554cc45d17 [] []}} ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-fpxlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--fpxlt-" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.219 [INFO][4194] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-fpxlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.252 [INFO][4213] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" HandleID="k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.265 [INFO][4213] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" HandleID="k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" 
Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-857fcd798-fpxlt", "timestamp":"2025-05-13 00:50:59.252124188 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.265 [INFO][4213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.265 [INFO][4213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.265 [INFO][4213] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.266 [INFO][4213] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.269 [INFO][4213] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.272 [INFO][4213] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.273 [INFO][4213] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.277 [INFO][4213] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.277 [INFO][4213] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.278 [INFO][4213] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0 May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.281 [INFO][4213] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.285 [INFO][4213] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.285 [INFO][4213] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" host="localhost" May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.286 [INFO][4213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:50:59.312008 env[1313]: 2025-05-13 00:50:59.286 [INFO][4213] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" HandleID="k8s-pod-network.bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.312660 env[1313]: 2025-05-13 00:50:59.287 [INFO][4194] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-fpxlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0", GenerateName:"calico-apiserver-857fcd798-", Namespace:"calico-apiserver", SelfLink:"", UID:"16ab6220-b9fb-42eb-b90d-d41f68bb7889", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857fcd798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-857fcd798-fpxlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie554cc45d17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:59.312660 env[1313]: 2025-05-13 00:50:59.288 [INFO][4194] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-fpxlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.312660 env[1313]: 2025-05-13 00:50:59.288 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie554cc45d17 ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-fpxlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.312660 env[1313]: 2025-05-13 00:50:59.300 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-fpxlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.312660 env[1313]: 2025-05-13 00:50:59.300 [INFO][4194] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-fpxlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0", GenerateName:"calico-apiserver-857fcd798-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"16ab6220-b9fb-42eb-b90d-d41f68bb7889", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857fcd798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0", Pod:"calico-apiserver-857fcd798-fpxlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie554cc45d17", MAC:"86:1a:ca:6b:e8:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:50:59.312660 env[1313]: 2025-05-13 00:50:59.307 [INFO][4194] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0" Namespace="calico-apiserver" Pod="calico-apiserver-857fcd798-fpxlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:50:59.318983 kubelet[2213]: E0513 00:50:59.317315 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:59.319475 kubelet[2213]: E0513 00:50:59.319462 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:59.321000 audit[4270]: NETFILTER_CFG table=filter:109 family=2 entries=46 op=nft_register_chain pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:50:59.321000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7fffeac76690 a2=0 a3=7fffeac7667c items=0 ppid=3469 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:59.321000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:50:59.334827 kubelet[2213]: I0513 00:50:59.334757 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8w5v4" podStartSLOduration=36.334738963 podStartE2EDuration="36.334738963s" podCreationTimestamp="2025-05-13 00:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:50:59.333713048 +0000 UTC m=+51.349851651" watchObservedRunningTime="2025-05-13 00:50:59.334738963 +0000 UTC m=+51.350877576" May 13 00:50:59.356691 env[1313]: time="2025-05-13T00:50:59.355936115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:59.356691 env[1313]: time="2025-05-13T00:50:59.356054501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:59.356691 env[1313]: time="2025-05-13T00:50:59.356075724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:59.356691 env[1313]: time="2025-05-13T00:50:59.356235322Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0 pid=4284 runtime=io.containerd.runc.v2 May 13 00:50:59.363350 systemd[1]: run-containerd-runc-k8s.io-14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76-runc.4hom9Q.mount: Deactivated successfully. May 13 00:50:59.369000 audit[4297]: NETFILTER_CFG table=filter:110 family=2 entries=10 op=nft_register_rule pid=4297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:59.369000 audit[4297]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffaea27220 a2=0 a3=7fffaea2720c items=0 ppid=2377 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:59.369000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:59.374000 audit[4297]: NETFILTER_CFG table=nat:111 family=2 entries=44 op=nft_register_rule pid=4297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:59.374000 audit[4297]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffaea27220 a2=0 a3=7fffaea2720c items=0 ppid=2377 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:59.374000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:59.389000 audit[4316]: NETFILTER_CFG table=filter:112 family=2 entries=10 
op=nft_register_rule pid=4316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:59.389000 audit[4316]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffa75e3180 a2=0 a3=7fffa75e316c items=0 ppid=2377 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:59.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:59.398000 audit[4316]: NETFILTER_CFG table=nat:113 family=2 entries=56 op=nft_register_chain pid=4316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:50:59.398000 audit[4316]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffa75e3180 a2=0 a3=7fffa75e316c items=0 ppid=2377 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:50:59.398000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:50:59.411390 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:50:59.413856 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:50:59.441685 env[1313]: time="2025-05-13T00:50:59.441633827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f548f5c9b-2tczf,Uid:717e1a73-0b5d-4ee5-9bae-65be581845ed,Namespace:calico-system,Attempt:1,} returns sandbox id \"14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76\"" May 13 00:50:59.455722 env[1313]: time="2025-05-13T00:50:59.455683683Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857fcd798-fpxlt,Uid:16ab6220-b9fb-42eb-b90d-d41f68bb7889,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0\"" May 13 00:50:59.749123 systemd-networkd[1088]: cali63459748fc9: Gained IPv6LL May 13 00:51:00.059176 kernel: kauditd_printk_skb: 555 callbacks suppressed May 13 00:51:00.059245 kernel: audit: type=1130 audit(1747097460.056:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.140:22-10.0.0.1:52700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:00.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.140:22-10.0.0.1:52700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:00.057377 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:52700.service. 
May 13 00:51:00.059453 env[1313]: time="2025-05-13T00:51:00.055835765Z" level=info msg="StopPodSandbox for \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\"" May 13 00:51:00.098000 audit[4344]: USER_ACCT pid=4344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.099611 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 52700 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:00.108187 kernel: audit: type=1101 audit(1747097460.098:463): pid=4344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.108295 kernel: audit: type=1103 audit(1747097460.103:464): pid=4344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.103000 audit[4344]: CRED_ACQ pid=4344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.104359 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:00.108591 env[1313]: time="2025-05-13T00:51:00.105266071Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:00.108591 env[1313]: time="2025-05-13T00:51:00.107647884Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:00.110120 env[1313]: time="2025-05-13T00:51:00.110101130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:00.111485 kernel: audit: type=1006 audit(1747097460.103:465): pid=4344 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 13 00:51:00.103000 audit[4344]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca33eb670 a2=3 a3=0 items=0 ppid=1 pid=4344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:00.112930 systemd[1]: Started session-15.scope. May 13 00:51:00.113057 env[1313]: time="2025-05-13T00:51:00.112992639Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:00.113524 env[1313]: time="2025-05-13T00:51:00.113489217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 00:51:00.113657 systemd-logind[1296]: New session 15 of user core. 
May 13 00:51:00.115493 env[1313]: time="2025-05-13T00:51:00.115041730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 00:51:00.115720 env[1313]: time="2025-05-13T00:51:00.115697586Z" level=info msg="CreateContainer within sandbox \"78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:51:00.116081 kernel: audit: type=1300 audit(1747097460.103:465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca33eb670 a2=3 a3=0 items=0 ppid=1 pid=4344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:00.103000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:00.118184 kernel: audit: type=1327 audit(1747097460.103:465): proctitle=737368643A20636F7265205B707269765D May 13 00:51:00.120000 audit[4344]: USER_START pid=4344 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.121000 audit[4373]: CRED_ACQ pid=4373 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.131568 kernel: audit: type=1105 audit(1747097460.120:466): pid=4344 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.131634 kernel: audit: type=1103 audit(1747097460.121:467): pid=4373 uid=0 auid=500 
ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.131701 env[1313]: time="2025-05-13T00:51:00.131657442Z" level=info msg="CreateContainer within sandbox \"78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e12e8f1737a394029b45e86c3093b29063ca0c5d68e9d601edc6de3802129fe\"" May 13 00:51:00.133225 env[1313]: time="2025-05-13T00:51:00.132245873Z" level=info msg="StartContainer for \"3e12e8f1737a394029b45e86c3093b29063ca0c5d68e9d601edc6de3802129fe\"" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.103 [INFO][4357] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.103 [INFO][4357] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" iface="eth0" netns="/var/run/netns/cni-6668cc20-a5bb-f068-2400-6dec1b54e5a9" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.103 [INFO][4357] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" iface="eth0" netns="/var/run/netns/cni-6668cc20-a5bb-f068-2400-6dec1b54e5a9" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.103 [INFO][4357] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" iface="eth0" netns="/var/run/netns/cni-6668cc20-a5bb-f068-2400-6dec1b54e5a9" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.103 [INFO][4357] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.103 [INFO][4357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.140 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.140 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.140 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.145 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.145 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.147 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:00.152459 env[1313]: 2025-05-13 00:51:00.150 [INFO][4357] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:00.153064 env[1313]: time="2025-05-13T00:51:00.152586735Z" level=info msg="TearDown network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\" successfully" May 13 00:51:00.153064 env[1313]: time="2025-05-13T00:51:00.152614431Z" level=info msg="StopPodSandbox for \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\" returns successfully" May 13 00:51:00.153134 env[1313]: time="2025-05-13T00:51:00.153100087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbllw,Uid:8e52f16f-64af-4b4e-a240-a749e7055c20,Namespace:calico-system,Attempt:1,}" May 13 00:51:00.268273 systemd[1]: run-netns-cni\x2d6668cc20\x2da5bb\x2df068\x2d2400\x2d6dec1b54e5a9.mount: Deactivated successfully. 
May 13 00:51:00.565534 sshd[4344]: pam_unix(sshd:session): session closed for user core May 13 00:51:00.565000 audit[4344]: USER_END pid=4344 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.567583 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:52700.service: Deactivated successfully. May 13 00:51:00.568672 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:51:00.568929 systemd-logind[1296]: Session 15 logged out. Waiting for processes to exit. May 13 00:51:00.569692 systemd-logind[1296]: Removed session 15. May 13 00:51:00.565000 audit[4344]: CRED_DISP pid=4344 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.574273 kernel: audit: type=1106 audit(1747097460.565:468): pid=4344 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.574328 kernel: audit: type=1104 audit(1747097460.565:469): pid=4344 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:00.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.140:22-10.0.0.1:52700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:00.620048 env[1313]: time="2025-05-13T00:51:00.619998440Z" level=info msg="StartContainer for \"3e12e8f1737a394029b45e86c3093b29063ca0c5d68e9d601edc6de3802129fe\" returns successfully" May 13 00:51:00.625853 kubelet[2213]: E0513 00:51:00.625813 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:51:00.638699 systemd-networkd[1088]: cali3e49f676df0: Link UP May 13 00:51:00.641347 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:51:00.641394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3e49f676df0: link becomes ready May 13 00:51:00.641552 systemd-networkd[1088]: cali3e49f676df0: Gained carrier May 13 00:51:00.646133 systemd-networkd[1088]: calie554cc45d17: Gained IPv6LL May 13 00:51:00.650000 audit[4452]: NETFILTER_CFG table=filter:114 family=2 entries=10 op=nft_register_rule pid=4452 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:00.650000 audit[4452]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe610eff20 a2=0 a3=7ffe610eff0c items=0 ppid=2377 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:00.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:00.654488 kubelet[2213]: I0513 00:51:00.654431 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-857fcd798-pcg4l" podStartSLOduration=28.819582242 podStartE2EDuration="31.654412379s" podCreationTimestamp="2025-05-13 00:50:29 +0000 UTC" firstStartedPulling="2025-05-13 00:50:57.279712399 +0000 UTC m=+49.295851012" lastFinishedPulling="2025-05-13 00:51:00.114542516 +0000 
UTC m=+52.130681149" observedRunningTime="2025-05-13 00:51:00.635383461 +0000 UTC m=+52.651522064" watchObservedRunningTime="2025-05-13 00:51:00.654412379 +0000 UTC m=+52.670550992" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.231 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dbllw-eth0 csi-node-driver- calico-system 8e52f16f-64af-4b4e-a240-a749e7055c20 998 0 2025-05-13 00:50:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dbllw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3e49f676df0 [] []}} ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Namespace="calico-system" Pod="csi-node-driver-dbllw" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbllw-" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.231 [INFO][4403] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Namespace="calico-system" Pod="csi-node-driver-dbllw" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.546 [INFO][4435] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" HandleID="k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.563 [INFO][4435] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" HandleID="k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dbllw", "timestamp":"2025-05-13 00:51:00.546253338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.563 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.563 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.563 [INFO][4435] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.565 [INFO][4435] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.595 [INFO][4435] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.600 [INFO][4435] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.601 [INFO][4435] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.619 [INFO][4435] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 
00:51:00.619 [INFO][4435] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.622 [INFO][4435] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77 May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.627 [INFO][4435] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.633 [INFO][4435] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.633 [INFO][4435] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" host="localhost" May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.633 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:51:00.656336 env[1313]: 2025-05-13 00:51:00.633 [INFO][4435] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" HandleID="k8s-pod-network.2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.656992 env[1313]: 2025-05-13 00:51:00.636 [INFO][4403] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Namespace="calico-system" Pod="csi-node-driver-dbllw" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbllw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dbllw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e52f16f-64af-4b4e-a240-a749e7055c20", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dbllw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e49f676df0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:00.656992 env[1313]: 2025-05-13 00:51:00.636 [INFO][4403] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Namespace="calico-system" Pod="csi-node-driver-dbllw" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.656992 env[1313]: 2025-05-13 00:51:00.636 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e49f676df0 ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Namespace="calico-system" Pod="csi-node-driver-dbllw" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.656992 env[1313]: 2025-05-13 00:51:00.641 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Namespace="calico-system" Pod="csi-node-driver-dbllw" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.656992 env[1313]: 2025-05-13 00:51:00.642 [INFO][4403] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Namespace="calico-system" Pod="csi-node-driver-dbllw" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbllw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dbllw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e52f16f-64af-4b4e-a240-a749e7055c20", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77", Pod:"csi-node-driver-dbllw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e49f676df0", MAC:"ae:8f:1d:7d:79:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:00.656992 env[1313]: 2025-05-13 00:51:00.654 [INFO][4403] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77" Namespace="calico-system" Pod="csi-node-driver-dbllw" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:00.656000 audit[4452]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=4452 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:00.656000 audit[4452]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe610eff20 a2=0 a3=7ffe610eff0c items=0 ppid=2377 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:00.656000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:00.668000 audit[4467]: NETFILTER_CFG table=filter:116 family=2 entries=50 op=nft_register_chain pid=4467 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 00:51:00.668000 audit[4467]: SYSCALL arch=c000003e syscall=46 success=yes exit=23392 a0=3 a1=7ffee2d3e440 a2=0 a3=7ffee2d3e42c items=0 ppid=3469 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:00.669840 env[1313]: time="2025-05-13T00:51:00.669707671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:51:00.669840 env[1313]: time="2025-05-13T00:51:00.669758011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:51:00.669840 env[1313]: time="2025-05-13T00:51:00.669777139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:51:00.669965 env[1313]: time="2025-05-13T00:51:00.669902329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77 pid=4474 runtime=io.containerd.runc.v2 May 13 00:51:00.668000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 00:51:00.687210 systemd[1]: run-containerd-runc-k8s.io-2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77-runc.faZzkC.mount: Deactivated successfully. 
May 13 00:51:00.698217 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:51:00.710167 env[1313]: time="2025-05-13T00:51:00.710131820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbllw,Uid:8e52f16f-64af-4b4e-a240-a749e7055c20,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77\"" May 13 00:51:01.093089 systemd-networkd[1088]: cali5f487c2cc06: Gained IPv6LL May 13 00:51:01.629434 kubelet[2213]: E0513 00:51:01.629394 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:51:01.666000 audit[4515]: NETFILTER_CFG table=filter:117 family=2 entries=9 op=nft_register_rule pid=4515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:01.666000 audit[4515]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff76fac380 a2=0 a3=7fff76fac36c items=0 ppid=2377 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:01.666000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:01.674000 audit[4515]: NETFILTER_CFG table=nat:118 family=2 entries=27 op=nft_register_chain pid=4515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:01.674000 audit[4515]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff76fac380 a2=0 a3=7fff76fac36c items=0 ppid=2377 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) May 13 00:51:01.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:02.247136 systemd-networkd[1088]: cali3e49f676df0: Gained IPv6LL May 13 00:51:02.726145 env[1313]: time="2025-05-13T00:51:02.726108912Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:02.728217 env[1313]: time="2025-05-13T00:51:02.728191975Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:02.729861 env[1313]: time="2025-05-13T00:51:02.729825664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:02.731355 env[1313]: time="2025-05-13T00:51:02.731325045Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:02.731767 env[1313]: time="2025-05-13T00:51:02.731736414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 00:51:02.732613 env[1313]: time="2025-05-13T00:51:02.732589730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:51:02.740245 env[1313]: time="2025-05-13T00:51:02.740207408Z" level=info msg="CreateContainer within sandbox \"14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 00:51:02.752515 env[1313]: time="2025-05-13T00:51:02.752484341Z" level=info msg="CreateContainer within sandbox \"14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05b882c503634fce5c746905a94f0f5ed6ee39fb7888a89dce1cd6f082b0bc38\"" May 13 00:51:02.752894 env[1313]: time="2025-05-13T00:51:02.752867584Z" level=info msg="StartContainer for \"05b882c503634fce5c746905a94f0f5ed6ee39fb7888a89dce1cd6f082b0bc38\"" May 13 00:51:02.880728 env[1313]: time="2025-05-13T00:51:02.880666468Z" level=info msg="StartContainer for \"05b882c503634fce5c746905a94f0f5ed6ee39fb7888a89dce1cd6f082b0bc38\" returns successfully" May 13 00:51:03.141100 env[1313]: time="2025-05-13T00:51:03.140997487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:03.142856 env[1313]: time="2025-05-13T00:51:03.142824608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:03.144365 env[1313]: time="2025-05-13T00:51:03.144328496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:03.145766 env[1313]: time="2025-05-13T00:51:03.145731975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:03.146184 env[1313]: time="2025-05-13T00:51:03.146155487Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 00:51:03.147128 env[1313]: time="2025-05-13T00:51:03.147105294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:51:03.147987 env[1313]: time="2025-05-13T00:51:03.147958720Z" level=info msg="CreateContainer within sandbox \"bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:51:03.160189 env[1313]: time="2025-05-13T00:51:03.160153353Z" level=info msg="CreateContainer within sandbox \"bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59022b4f2e7766e32c26ed36bcefdf7f1e49a35af3a7868b76a742c682545571\"" May 13 00:51:03.160570 env[1313]: time="2025-05-13T00:51:03.160537587Z" level=info msg="StartContainer for \"59022b4f2e7766e32c26ed36bcefdf7f1e49a35af3a7868b76a742c682545571\"" May 13 00:51:03.207252 env[1313]: time="2025-05-13T00:51:03.207205503Z" level=info msg="StartContainer for \"59022b4f2e7766e32c26ed36bcefdf7f1e49a35af3a7868b76a742c682545571\" returns successfully" May 13 00:51:03.646503 kubelet[2213]: I0513 00:51:03.646450 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-857fcd798-fpxlt" podStartSLOduration=30.954722887 podStartE2EDuration="34.644320023s" podCreationTimestamp="2025-05-13 00:50:29 +0000 UTC" firstStartedPulling="2025-05-13 00:50:59.457329303 +0000 UTC m=+51.473467906" lastFinishedPulling="2025-05-13 00:51:03.146926429 +0000 UTC m=+55.163065042" observedRunningTime="2025-05-13 00:51:03.64385075 +0000 UTC m=+55.659989383" watchObservedRunningTime="2025-05-13 00:51:03.644320023 +0000 UTC m=+55.660458656" May 13 00:51:03.653000 audit[4599]: NETFILTER_CFG table=filter:119 family=2 entries=8 op=nft_register_rule pid=4599 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:03.653000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffecec9d7c0 a2=0 a3=7ffecec9d7ac items=0 ppid=2377 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:03.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:03.664100 kubelet[2213]: I0513 00:51:03.664052 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f548f5c9b-2tczf" podStartSLOduration=31.375714819 podStartE2EDuration="34.664036031s" podCreationTimestamp="2025-05-13 00:50:29 +0000 UTC" firstStartedPulling="2025-05-13 00:50:59.44411845 +0000 UTC m=+51.460257063" lastFinishedPulling="2025-05-13 00:51:02.732439662 +0000 UTC m=+54.748578275" observedRunningTime="2025-05-13 00:51:03.656501439 +0000 UTC m=+55.672640052" watchObservedRunningTime="2025-05-13 00:51:03.664036031 +0000 UTC m=+55.680174644" May 13 00:51:03.664000 audit[4599]: NETFILTER_CFG table=nat:120 family=2 entries=30 op=nft_register_rule pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:03.664000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffecec9d7c0 a2=0 a3=7ffecec9d7ac items=0 ppid=2377 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:03.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:04.557715 env[1313]: time="2025-05-13T00:51:04.557612832Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:04.559463 env[1313]: time="2025-05-13T00:51:04.559384540Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:04.560771 env[1313]: time="2025-05-13T00:51:04.560741415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:04.562123 env[1313]: time="2025-05-13T00:51:04.562093720Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:04.562492 env[1313]: time="2025-05-13T00:51:04.562457322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 00:51:04.564348 env[1313]: time="2025-05-13T00:51:04.564306976Z" level=info msg="CreateContainer within sandbox \"2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:51:04.576284 env[1313]: time="2025-05-13T00:51:04.576225875Z" level=info msg="CreateContainer within sandbox \"2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e58c8d613efbea45c47bc37b65363f89f02b5784f6daec4af826c81372d2c2e8\"" May 13 00:51:04.576724 env[1313]: time="2025-05-13T00:51:04.576703023Z" level=info msg="StartContainer for \"e58c8d613efbea45c47bc37b65363f89f02b5784f6daec4af826c81372d2c2e8\"" May 13 00:51:04.621647 env[1313]: 
time="2025-05-13T00:51:04.621605443Z" level=info msg="StartContainer for \"e58c8d613efbea45c47bc37b65363f89f02b5784f6daec4af826c81372d2c2e8\" returns successfully" May 13 00:51:04.624088 env[1313]: time="2025-05-13T00:51:04.624056079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:51:04.638832 kubelet[2213]: I0513 00:51:04.638802 2213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:51:05.568925 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:47858.service. May 13 00:51:05.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.140:22-10.0.0.1:47858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:05.779718 kernel: kauditd_printk_skb: 22 callbacks suppressed May 13 00:51:05.779834 kernel: audit: type=1130 audit(1747097465.568:478): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.140:22-10.0.0.1:47858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:05.826000 audit[4645]: USER_ACCT pid=4645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.827672 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 47858 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:05.831000 audit[4645]: CRED_ACQ pid=4645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.831979 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:05.835230 kernel: audit: type=1101 audit(1747097465.826:479): pid=4645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.835283 kernel: audit: type=1103 audit(1747097465.831:480): pid=4645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.835302 kernel: audit: type=1006 audit(1747097465.831:481): pid=4645 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 13 00:51:05.836111 systemd-logind[1296]: New session 16 of user core. May 13 00:51:05.836204 systemd[1]: Started session-16.scope. 
May 13 00:51:05.831000 audit[4645]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf40993b0 a2=3 a3=0 items=0 ppid=1 pid=4645 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:05.841995 kernel: audit: type=1300 audit(1747097465.831:481): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf40993b0 a2=3 a3=0 items=0 ppid=1 pid=4645 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:05.842269 kernel: audit: type=1327 audit(1747097465.831:481): proctitle=737368643A20636F7265205B707269765D May 13 00:51:05.831000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:05.843334 kernel: audit: type=1105 audit(1747097465.841:482): pid=4645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.841000 audit[4645]: USER_START pid=4645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.843000 audit[4650]: CRED_ACQ pid=4650 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.850845 kernel: audit: type=1103 audit(1747097465.843:483): pid=4650 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.945694 sshd[4645]: pam_unix(sshd:session): session closed for user core May 13 00:51:05.945000 audit[4645]: USER_END pid=4645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.948092 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:47858.service: Deactivated successfully. May 13 00:51:05.948858 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:51:05.945000 audit[4645]: CRED_DISP pid=4645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.951645 systemd-logind[1296]: Session 16 logged out. Waiting for processes to exit. May 13 00:51:05.952285 systemd-logind[1296]: Removed session 16. May 13 00:51:05.954597 kernel: audit: type=1106 audit(1747097465.945:484): pid=4645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.954654 kernel: audit: type=1104 audit(1747097465.945:485): pid=4645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:05.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.140:22-10.0.0.1:47858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 13 00:51:06.600175 env[1313]: time="2025-05-13T00:51:06.600118596Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:06.601997 env[1313]: time="2025-05-13T00:51:06.601937193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:06.603276 env[1313]: time="2025-05-13T00:51:06.603237112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:06.604587 env[1313]: time="2025-05-13T00:51:06.604555046Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:51:06.604925 env[1313]: time="2025-05-13T00:51:06.604898387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 00:51:06.606819 env[1313]: time="2025-05-13T00:51:06.606791461Z" level=info msg="CreateContainer within sandbox \"2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:51:06.619674 env[1313]: time="2025-05-13T00:51:06.619619697Z" level=info msg="CreateContainer within sandbox \"2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"b806701c84a22266a495dcd7a33f1bc5394c2d91e4f804d9574c893623f677db\"" May 13 00:51:06.620133 env[1313]: time="2025-05-13T00:51:06.620102164Z" level=info msg="StartContainer for \"b806701c84a22266a495dcd7a33f1bc5394c2d91e4f804d9574c893623f677db\"" May 13 00:51:06.662155 env[1313]: time="2025-05-13T00:51:06.662110176Z" level=info msg="StartContainer for \"b806701c84a22266a495dcd7a33f1bc5394c2d91e4f804d9574c893623f677db\" returns successfully" May 13 00:51:07.134346 kubelet[2213]: I0513 00:51:07.134309 2213 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:51:07.134346 kubelet[2213]: I0513 00:51:07.134344 2213 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:51:08.039155 env[1313]: time="2025-05-13T00:51:08.039113168Z" level=info msg="StopPodSandbox for \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\"" May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.071 [WARNING][4719] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0", GenerateName:"calico-apiserver-857fcd798-", Namespace:"calico-apiserver", SelfLink:"", UID:"16ab6220-b9fb-42eb-b90d-d41f68bb7889", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857fcd798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0", Pod:"calico-apiserver-857fcd798-fpxlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie554cc45d17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.071 [INFO][4719] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.071 [INFO][4719] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" iface="eth0" netns="" May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.071 [INFO][4719] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.071 [INFO][4719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.093 [INFO][4730] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.093 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.093 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.097 [WARNING][4730] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.097 [INFO][4730] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.098 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.101740 env[1313]: 2025-05-13 00:51:08.100 [INFO][4719] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:51:08.102340 env[1313]: time="2025-05-13T00:51:08.101769146Z" level=info msg="TearDown network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\" successfully" May 13 00:51:08.102340 env[1313]: time="2025-05-13T00:51:08.101802282Z" level=info msg="StopPodSandbox for \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\" returns successfully" May 13 00:51:08.102340 env[1313]: time="2025-05-13T00:51:08.102269357Z" level=info msg="RemovePodSandbox for \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\"" May 13 00:51:08.102340 env[1313]: time="2025-05-13T00:51:08.102291360Z" level=info msg="Forcibly stopping sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\"" May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.135 [WARNING][4753] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0", GenerateName:"calico-apiserver-857fcd798-", Namespace:"calico-apiserver", SelfLink:"", UID:"16ab6220-b9fb-42eb-b90d-d41f68bb7889", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857fcd798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdfc2500d47cf8d2f3c41ac0669294bb8124949599a274449d4d79079c22d4f0", Pod:"calico-apiserver-857fcd798-fpxlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie554cc45d17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.136 [INFO][4753] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.136 [INFO][4753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" iface="eth0" netns="" May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.136 [INFO][4753] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.136 [INFO][4753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.152 [INFO][4762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.153 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.153 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.157 [WARNING][4762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.157 [INFO][4762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" HandleID="k8s-pod-network.fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" Workload="localhost-k8s-calico--apiserver--857fcd798--fpxlt-eth0" May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.158 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.161666 env[1313]: 2025-05-13 00:51:08.160 [INFO][4753] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b" May 13 00:51:08.162138 env[1313]: time="2025-05-13T00:51:08.161695415Z" level=info msg="TearDown network for sandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\" successfully" May 13 00:51:08.165049 env[1313]: time="2025-05-13T00:51:08.165018971Z" level=info msg="RemovePodSandbox \"fbb01d7c653ba8595aa23eaeb35b66b3b90d073c3e625835025de8c49f3c698b\" returns successfully" May 13 00:51:08.165694 env[1313]: time="2025-05-13T00:51:08.165643458Z" level=info msg="StopPodSandbox for \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\"" May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.195 [WARNING][4784] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--95lft-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ad75be90-a580-4409-ab7f-57d0bc34975e", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961", Pod:"coredns-7db6d8ff4d-95lft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97c2a3a3064", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.195 [INFO][4784] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.195 [INFO][4784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" iface="eth0" netns="" May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.195 [INFO][4784] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.195 [INFO][4784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.213 [INFO][4793] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.213 [INFO][4793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.213 [INFO][4793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.218 [WARNING][4793] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.218 [INFO][4793] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.219 [INFO][4793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.222814 env[1313]: 2025-05-13 00:51:08.221 [INFO][4784] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:51:08.223283 env[1313]: time="2025-05-13T00:51:08.222842934Z" level=info msg="TearDown network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\" successfully" May 13 00:51:08.223283 env[1313]: time="2025-05-13T00:51:08.222873365Z" level=info msg="StopPodSandbox for \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\" returns successfully" May 13 00:51:08.223390 env[1313]: time="2025-05-13T00:51:08.223358215Z" level=info msg="RemovePodSandbox for \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\"" May 13 00:51:08.223453 env[1313]: time="2025-05-13T00:51:08.223387443Z" level=info msg="Forcibly stopping sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\"" May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.254 [WARNING][4816] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--95lft-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ad75be90-a580-4409-ab7f-57d0bc34975e", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e748e96869586d3081e10a56a3b39c5a8663b57fce881fac43da44ea017dd961", Pod:"coredns-7db6d8ff4d-95lft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97c2a3a3064", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.254 [INFO][4816] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.254 [INFO][4816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" iface="eth0" netns="" May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.254 [INFO][4816] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.254 [INFO][4816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.272 [INFO][4825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.272 [INFO][4825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.273 [INFO][4825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.277 [WARNING][4825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.277 [INFO][4825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" HandleID="k8s-pod-network.842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" Workload="localhost-k8s-coredns--7db6d8ff4d--95lft-eth0" May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.278 [INFO][4825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.281617 env[1313]: 2025-05-13 00:51:08.280 [INFO][4816] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d" May 13 00:51:08.282087 env[1313]: time="2025-05-13T00:51:08.281645026Z" level=info msg="TearDown network for sandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\" successfully" May 13 00:51:08.284835 env[1313]: time="2025-05-13T00:51:08.284802942Z" level=info msg="RemovePodSandbox \"842a8a11bef8a2e6b8992e63891b8a464cf7e4f575bd311864c19002c174af8d\" returns successfully" May 13 00:51:08.285359 env[1313]: time="2025-05-13T00:51:08.285321400Z" level=info msg="StopPodSandbox for \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\"" May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.314 [WARNING][4848] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d483d5d7-194a-4438-b970-a2e8097bf20a", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685", Pod:"coredns-7db6d8ff4d-8w5v4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63459748fc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.314 [INFO][4848] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.314 [INFO][4848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" iface="eth0" netns="" May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.315 [INFO][4848] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.315 [INFO][4848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.332 [INFO][4856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.332 [INFO][4856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.332 [INFO][4856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.337 [WARNING][4856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.337 [INFO][4856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.342 [INFO][4856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.345641 env[1313]: 2025-05-13 00:51:08.344 [INFO][4848] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:51:08.345641 env[1313]: time="2025-05-13T00:51:08.345608013Z" level=info msg="TearDown network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\" successfully" May 13 00:51:08.345641 env[1313]: time="2025-05-13T00:51:08.345636720Z" level=info msg="StopPodSandbox for \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\" returns successfully" May 13 00:51:08.347110 env[1313]: time="2025-05-13T00:51:08.347064709Z" level=info msg="RemovePodSandbox for \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\"" May 13 00:51:08.347427 env[1313]: time="2025-05-13T00:51:08.347108416Z" level=info msg="Forcibly stopping sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\"" May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.376 [WARNING][4879] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d483d5d7-194a-4438-b970-a2e8097bf20a", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7345da28f1ef664faa6548644c7b718c52ac01be3c9aea875a4ea09ebc5d685", Pod:"coredns-7db6d8ff4d-8w5v4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63459748fc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.376 [INFO][4879] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.376 [INFO][4879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" iface="eth0" netns="" May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.376 [INFO][4879] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.376 [INFO][4879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.395 [INFO][4887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.395 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.395 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.400 [WARNING][4887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.400 [INFO][4887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" HandleID="k8s-pod-network.cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" Workload="localhost-k8s-coredns--7db6d8ff4d--8w5v4-eth0" May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.402 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.404612 env[1313]: 2025-05-13 00:51:08.403 [INFO][4879] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464" May 13 00:51:08.405147 env[1313]: time="2025-05-13T00:51:08.404635431Z" level=info msg="TearDown network for sandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\" successfully" May 13 00:51:08.580103 env[1313]: time="2025-05-13T00:51:08.580050553Z" level=info msg="RemovePodSandbox \"cbbaa2c6162d08818352aab00b7c42b6c9286238e40b0f8e8808e07d3207d464\" returns successfully" May 13 00:51:08.580501 env[1313]: time="2025-05-13T00:51:08.580466788Z" level=info msg="StopPodSandbox for \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\"" May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.611 [WARNING][4909] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dbllw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e52f16f-64af-4b4e-a240-a749e7055c20", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77", Pod:"csi-node-driver-dbllw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e49f676df0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.611 [INFO][4909] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.611 [INFO][4909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" iface="eth0" netns="" May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.611 [INFO][4909] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.611 [INFO][4909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.628 [INFO][4918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.628 [INFO][4918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.628 [INFO][4918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.633 [WARNING][4918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.633 [INFO][4918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.634 [INFO][4918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:51:08.637109 env[1313]: 2025-05-13 00:51:08.635 [INFO][4909] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:08.637109 env[1313]: time="2025-05-13T00:51:08.637076405Z" level=info msg="TearDown network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\" successfully" May 13 00:51:08.637109 env[1313]: time="2025-05-13T00:51:08.637105843Z" level=info msg="StopPodSandbox for \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\" returns successfully" May 13 00:51:08.637589 env[1313]: time="2025-05-13T00:51:08.637545134Z" level=info msg="RemovePodSandbox for \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\"" May 13 00:51:08.637635 env[1313]: time="2025-05-13T00:51:08.637577848Z" level=info msg="Forcibly stopping sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\"" May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.669 [WARNING][4940] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dbllw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e52f16f-64af-4b4e-a240-a749e7055c20", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f6a8b953a5d661f0513ef97e6ce6af066e49a41d8a301629e7963e917a15e77", Pod:"csi-node-driver-dbllw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e49f676df0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.669 [INFO][4940] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.669 [INFO][4940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" iface="eth0" netns="" May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.669 [INFO][4940] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.669 [INFO][4940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.688 [INFO][4948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.688 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.688 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.693 [WARNING][4948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.693 [INFO][4948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" HandleID="k8s-pod-network.5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" Workload="localhost-k8s-csi--node--driver--dbllw-eth0" May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.694 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:51:08.697143 env[1313]: 2025-05-13 00:51:08.695 [INFO][4940] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205" May 13 00:51:08.697563 env[1313]: time="2025-05-13T00:51:08.697174425Z" level=info msg="TearDown network for sandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\" successfully" May 13 00:51:08.700437 env[1313]: time="2025-05-13T00:51:08.700414464Z" level=info msg="RemovePodSandbox \"5dbf4e6efddffbc09bba351f77ff50ea2fa30a9f724ce5eb3397dee59771e205\" returns successfully" May 13 00:51:08.700853 env[1313]: time="2025-05-13T00:51:08.700802443Z" level=info msg="StopPodSandbox for \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\"" May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.729 [WARNING][4970] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0", GenerateName:"calico-kube-controllers-f548f5c9b-", Namespace:"calico-system", SelfLink:"", UID:"717e1a73-0b5d-4ee5-9bae-65be581845ed", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f548f5c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76", Pod:"calico-kube-controllers-f548f5c9b-2tczf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f487c2cc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.729 [INFO][4970] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.729 [INFO][4970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" iface="eth0" netns="" May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.729 [INFO][4970] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.729 [INFO][4970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.747 [INFO][4979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.747 [INFO][4979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.747 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.752 [WARNING][4979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.752 [INFO][4979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.753 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.756258 env[1313]: 2025-05-13 00:51:08.755 [INFO][4970] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:51:08.756258 env[1313]: time="2025-05-13T00:51:08.756245149Z" level=info msg="TearDown network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\" successfully" May 13 00:51:08.756903 env[1313]: time="2025-05-13T00:51:08.756270208Z" level=info msg="StopPodSandbox for \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\" returns successfully" May 13 00:51:08.756903 env[1313]: time="2025-05-13T00:51:08.756779296Z" level=info msg="RemovePodSandbox for \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\"" May 13 00:51:08.756903 env[1313]: time="2025-05-13T00:51:08.756815588Z" level=info msg="Forcibly stopping sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\"" May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.785 [WARNING][5002] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0", GenerateName:"calico-kube-controllers-f548f5c9b-", Namespace:"calico-system", SelfLink:"", UID:"717e1a73-0b5d-4ee5-9bae-65be581845ed", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f548f5c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14255f8d0988fad3486991fd84091dd569942e388e4a08e29d3b81ea198aca76", Pod:"calico-kube-controllers-f548f5c9b-2tczf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f487c2cc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.785 [INFO][5002] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.785 [INFO][5002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" iface="eth0" netns="" May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.785 [INFO][5002] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.785 [INFO][5002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.802 [INFO][5012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.802 [INFO][5012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.803 [INFO][5012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.807 [WARNING][5012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.807 [INFO][5012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" HandleID="k8s-pod-network.69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" Workload="localhost-k8s-calico--kube--controllers--f548f5c9b--2tczf-eth0" May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.808 [INFO][5012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.810874 env[1313]: 2025-05-13 00:51:08.809 [INFO][5002] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021" May 13 00:51:08.811307 env[1313]: time="2025-05-13T00:51:08.810904381Z" level=info msg="TearDown network for sandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\" successfully" May 13 00:51:08.814420 env[1313]: time="2025-05-13T00:51:08.814368315Z" level=info msg="RemovePodSandbox \"69b0e2926f7c73e466760c7a3a7ed828d02b318999d84df985d95e515f465021\" returns successfully" May 13 00:51:08.814821 env[1313]: time="2025-05-13T00:51:08.814790912Z" level=info msg="StopPodSandbox for \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\"" May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.844 [WARNING][5035] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0", GenerateName:"calico-apiserver-857fcd798-", Namespace:"calico-apiserver", SelfLink:"", UID:"575e7f4c-4b2b-4b60-8634-168da3235e29", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857fcd798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b", Pod:"calico-apiserver-857fcd798-pcg4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ab45acee7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.844 [INFO][5035] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.844 [INFO][5035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" iface="eth0" netns="" May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.845 [INFO][5035] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.845 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.862 [INFO][5044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.862 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.862 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.867 [WARNING][5044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.867 [INFO][5044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.868 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.871269 env[1313]: 2025-05-13 00:51:08.869 [INFO][5035] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:51:08.871751 env[1313]: time="2025-05-13T00:51:08.871282095Z" level=info msg="TearDown network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\" successfully" May 13 00:51:08.871751 env[1313]: time="2025-05-13T00:51:08.871313597Z" level=info msg="StopPodSandbox for \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\" returns successfully" May 13 00:51:08.871833 env[1313]: time="2025-05-13T00:51:08.871800182Z" level=info msg="RemovePodSandbox for \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\"" May 13 00:51:08.871874 env[1313]: time="2025-05-13T00:51:08.871839169Z" level=info msg="Forcibly stopping sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\"" May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.901 [WARNING][5066] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0", GenerateName:"calico-apiserver-857fcd798-", Namespace:"calico-apiserver", SelfLink:"", UID:"575e7f4c-4b2b-4b60-8634-168da3235e29", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857fcd798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78751dac17450c246deeec23396a0c2fa3561cfb222be0f4efb481a23b82829b", Pod:"calico-apiserver-857fcd798-pcg4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ab45acee7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.901 [INFO][5066] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.901 [INFO][5066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" iface="eth0" netns="" May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.901 [INFO][5066] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.901 [INFO][5066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.919 [INFO][5075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.919 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.919 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.924 [WARNING][5075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.924 [INFO][5075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" HandleID="k8s-pod-network.b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" Workload="localhost-k8s-calico--apiserver--857fcd798--pcg4l-eth0" May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.926 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:51:08.928597 env[1313]: 2025-05-13 00:51:08.927 [INFO][5066] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce" May 13 00:51:08.929179 env[1313]: time="2025-05-13T00:51:08.928621228Z" level=info msg="TearDown network for sandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\" successfully" May 13 00:51:08.931865 env[1313]: time="2025-05-13T00:51:08.931838142Z" level=info msg="RemovePodSandbox \"b4f2b6e1b0132d3a95b6e55f57c648e970f774be524c5277e174e1c7c9a836ce\" returns successfully" May 13 00:51:10.948803 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:47868.service. May 13 00:51:10.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.140:22-10.0.0.1:47868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:10.950013 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 00:51:10.950065 kernel: audit: type=1130 audit(1747097470.947:487): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.140:22-10.0.0.1:47868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:10.980000 audit[5085]: USER_ACCT pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:10.981659 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 47868 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:10.983273 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:10.981000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:10.986743 systemd-logind[1296]: New session 17 of user core. May 13 00:51:10.987461 systemd[1]: Started session-17.scope. 
May 13 00:51:10.988834 kernel: audit: type=1101 audit(1747097470.980:488): pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:10.988961 kernel: audit: type=1103 audit(1747097470.981:489): pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:10.981000 audit[5085]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd33ba6240 a2=3 a3=0 items=0 ppid=1 pid=5085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:10.995135 kernel: audit: type=1006 audit(1747097470.981:490): pid=5085 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 May 13 00:51:10.995197 kernel: audit: type=1300 audit(1747097470.981:490): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd33ba6240 a2=3 a3=0 items=0 ppid=1 pid=5085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:10.995216 kernel: audit: type=1327 audit(1747097470.981:490): proctitle=737368643A20636F7265205B707269765D May 13 00:51:10.981000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:10.991000 audit[5085]: USER_START pid=5085 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' May 13 00:51:11.000525 kernel: audit: type=1105 audit(1747097470.991:491): pid=5085 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.000566 kernel: audit: type=1103 audit(1747097470.992:492): pid=5088 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:10.992000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.096896 sshd[5085]: pam_unix(sshd:session): session closed for user core May 13 00:51:11.099467 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:47882.service. May 13 00:51:11.099894 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:47868.service: Deactivated successfully. May 13 00:51:11.096000 audit[5085]: USER_END pid=5085 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.104106 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:51:11.097000 audit[5085]: CRED_DISP pid=5085 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.105043 systemd-logind[1296]: Session 17 logged out. Waiting for processes to exit. 
May 13 00:51:11.105898 systemd-logind[1296]: Removed session 17. May 13 00:51:11.107593 kernel: audit: type=1106 audit(1747097471.096:493): pid=5085 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.107655 kernel: audit: type=1104 audit(1747097471.097:494): pid=5085 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.140:22-10.0.0.1:47882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:11.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.140:22-10.0.0.1:47868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:11.131000 audit[5097]: USER_ACCT pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.132965 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 47882 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:11.132000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.132000 audit[5097]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff17b1f780 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:11.132000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:11.133778 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:11.137010 systemd-logind[1296]: New session 18 of user core. May 13 00:51:11.137781 systemd[1]: Started session-18.scope. 
May 13 00:51:11.140000 audit[5097]: USER_START pid=5097 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.142000 audit[5102]: CRED_ACQ pid=5102 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.307771 sshd[5097]: pam_unix(sshd:session): session closed for user core May 13 00:51:11.307000 audit[5097]: USER_END pid=5097 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.308000 audit[5097]: CRED_DISP pid=5097 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.310511 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:47890.service. May 13 00:51:11.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.140:22-10.0.0.1:47890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:11.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.140:22-10.0.0.1:47882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:11.311689 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:47882.service: Deactivated successfully. 
May 13 00:51:11.312823 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:51:11.313303 systemd-logind[1296]: Session 18 logged out. Waiting for processes to exit. May 13 00:51:11.314144 systemd-logind[1296]: Removed session 18. May 13 00:51:11.342000 audit[5109]: USER_ACCT pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.343572 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 47890 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:11.343000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.343000 audit[5109]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd09ab6190 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:11.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:11.344323 sshd[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:11.347640 systemd-logind[1296]: New session 19 of user core. May 13 00:51:11.348357 systemd[1]: Started session-19.scope. 
May 13 00:51:11.351000 audit[5109]: USER_START pid=5109 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:11.352000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:12.851000 audit[5127]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5127 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:12.851000 audit[5127]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc14c96880 a2=0 a3=7ffc14c9686c items=0 ppid=2377 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:12.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:12.856000 audit[5127]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=5127 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:12.856000 audit[5127]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc14c96880 a2=0 a3=0 items=0 ppid=2377 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:12.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:12.859258 sshd[5109]: 
pam_unix(sshd:session): session closed for user core May 13 00:51:12.860000 audit[5109]: USER_END pid=5109 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:12.860000 audit[5109]: CRED_DISP pid=5109 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:12.861606 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:47904.service. May 13 00:51:12.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.140:22-10.0.0.1:47904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:12.862378 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:47890.service: Deactivated successfully. May 13 00:51:12.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.140:22-10.0.0.1:47890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:12.863387 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:51:12.864015 systemd-logind[1296]: Session 19 logged out. Waiting for processes to exit. May 13 00:51:12.865194 systemd-logind[1296]: Removed session 19. 
May 13 00:51:12.888000 audit[5133]: NETFILTER_CFG table=filter:123 family=2 entries=32 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:12.888000 audit[5133]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffa766f010 a2=0 a3=7fffa766effc items=0 ppid=2377 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:12.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:12.894000 audit[5133]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:12.894000 audit[5133]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffa766f010 a2=0 a3=0 items=0 ppid=2377 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:12.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:12.901000 audit[5128]: USER_ACCT pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:12.902465 sshd[5128]: Accepted publickey for core from 10.0.0.1 port 47904 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:12.902000 audit[5128]: CRED_ACQ pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:12.902000 audit[5128]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce0dd5c30 a2=3 a3=0 items=0 ppid=1 pid=5128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:12.902000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:12.904088 sshd[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:12.907255 systemd-logind[1296]: New session 20 of user core. May 13 00:51:12.907988 systemd[1]: Started session-20.scope. May 13 00:51:12.911000 audit[5128]: USER_START pid=5128 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:12.912000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.145739 sshd[5128]: pam_unix(sshd:session): session closed for user core May 13 00:51:13.147847 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:47912.service. May 13 00:51:13.151563 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:47904.service: Deactivated successfully. May 13 00:51:13.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.140:22-10.0.0.1:47912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:13.149000 audit[5128]: USER_END pid=5128 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.149000 audit[5128]: CRED_DISP pid=5128 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.140:22-10.0.0.1:47904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:13.153566 systemd-logind[1296]: Session 20 logged out. Waiting for processes to exit. May 13 00:51:13.154021 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:51:13.155616 systemd-logind[1296]: Removed session 20. 
May 13 00:51:13.176000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.177549 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 47912 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:13.177000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.177000 audit[5142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbb822690 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:13.177000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:13.178740 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:13.181854 systemd-logind[1296]: New session 21 of user core. May 13 00:51:13.182664 systemd[1]: Started session-21.scope. 
May 13 00:51:13.185000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.186000 audit[5147]: CRED_ACQ pid=5147 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.287528 sshd[5142]: pam_unix(sshd:session): session closed for user core May 13 00:51:13.287000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.287000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:13.290031 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:47912.service: Deactivated successfully. May 13 00:51:13.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.140:22-10.0.0.1:47912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:13.290991 systemd-logind[1296]: Session 21 logged out. Waiting for processes to exit. May 13 00:51:13.291075 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:51:13.291740 systemd-logind[1296]: Removed session 21. May 13 00:51:18.290839 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:53600.service. 
May 13 00:51:18.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.140:22-10.0.0.1:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:18.295118 kernel: kauditd_printk_skb: 57 callbacks suppressed May 13 00:51:18.295169 kernel: audit: type=1130 audit(1747097478.290:536): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.140:22-10.0.0.1:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:18.318000 audit[5183]: USER_ACCT pid=5183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.319877 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 53600 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:18.321404 sshd[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:18.320000 audit[5183]: CRED_ACQ pid=5183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.324380 systemd-logind[1296]: New session 22 of user core. May 13 00:51:18.325069 systemd[1]: Started session-22.scope. 
May 13 00:51:18.327269 kernel: audit: type=1101 audit(1747097478.318:537): pid=5183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.327317 kernel: audit: type=1103 audit(1747097478.320:538): pid=5183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.329609 kernel: audit: type=1006 audit(1747097478.320:539): pid=5183 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 May 13 00:51:18.329676 kernel: audit: type=1300 audit(1747097478.320:539): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7083c610 a2=3 a3=0 items=0 ppid=1 pid=5183 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:18.320000 audit[5183]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7083c610 a2=3 a3=0 items=0 ppid=1 pid=5183 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:18.320000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:18.334855 kernel: audit: type=1327 audit(1747097478.320:539): proctitle=737368643A20636F7265205B707269765D May 13 00:51:18.334889 kernel: audit: type=1105 audit(1747097478.328:540): pid=5183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.328000 audit[5183]: USER_START pid=5183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.338976 kernel: audit: type=1103 audit(1747097478.329:541): pid=5186 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.329000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.422867 sshd[5183]: pam_unix(sshd:session): session closed for user core May 13 00:51:18.422000 audit[5183]: USER_END pid=5183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.425512 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:53600.service: Deactivated successfully. May 13 00:51:18.426302 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:51:18.427161 systemd-logind[1296]: Session 22 logged out. Waiting for processes to exit. May 13 00:51:18.427796 systemd-logind[1296]: Removed session 22. 
May 13 00:51:18.423000 audit[5183]: CRED_DISP pid=5183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.431502 kernel: audit: type=1106 audit(1747097478.422:542): pid=5183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.431552 kernel: audit: type=1104 audit(1747097478.423:543): pid=5183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:18.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.140:22-10.0.0.1:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:18.882000 audit[5198]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:18.882000 audit[5198]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd9de3d140 a2=0 a3=7ffd9de3d12c items=0 ppid=2377 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:18.882000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:18.888000 audit[5198]: NETFILTER_CFG table=nat:126 family=2 entries=106 op=nft_register_chain pid=5198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 00:51:18.888000 audit[5198]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd9de3d140 a2=0 a3=7ffd9de3d12c items=0 ppid=2377 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:18.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 00:51:23.425371 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:53604.service. May 13 00:51:23.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.140:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:23.426539 kernel: kauditd_printk_skb: 7 callbacks suppressed May 13 00:51:23.426659 kernel: audit: type=1130 audit(1747097483.424:547): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.140:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:23.453000 audit[5203]: USER_ACCT pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.454526 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 53604 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:23.456392 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:23.455000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.459655 systemd-logind[1296]: New session 23 of user core. May 13 00:51:23.460307 systemd[1]: Started session-23.scope. 
May 13 00:51:23.461679 kernel: audit: type=1101 audit(1747097483.453:548): pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.461730 kernel: audit: type=1103 audit(1747097483.455:549): pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.461750 kernel: audit: type=1006 audit(1747097483.455:550): pid=5203 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 May 13 00:51:23.455000 audit[5203]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe35bfbc50 a2=3 a3=0 items=0 ppid=1 pid=5203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:23.467782 kernel: audit: type=1300 audit(1747097483.455:550): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe35bfbc50 a2=3 a3=0 items=0 ppid=1 pid=5203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:23.467830 kernel: audit: type=1327 audit(1747097483.455:550): proctitle=737368643A20636F7265205B707269765D May 13 00:51:23.455000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:23.469058 kernel: audit: type=1105 audit(1747097483.463:551): pid=5203 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.463000 audit[5203]: USER_START pid=5203 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.464000 audit[5206]: CRED_ACQ pid=5206 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.476515 kernel: audit: type=1103 audit(1747097483.464:552): pid=5206 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.555469 sshd[5203]: pam_unix(sshd:session): session closed for user core May 13 00:51:23.555000 audit[5203]: USER_END pid=5203 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.557656 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:53604.service: Deactivated successfully. May 13 00:51:23.558612 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:51:23.558654 systemd-logind[1296]: Session 23 logged out. Waiting for processes to exit. May 13 00:51:23.559435 systemd-logind[1296]: Removed session 23. 
May 13 00:51:23.555000 audit[5203]: CRED_DISP pid=5203 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.563933 kernel: audit: type=1106 audit(1747097483.555:553): pid=5203 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.564101 kernel: audit: type=1104 audit(1747097483.555:554): pid=5203 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:23.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.140:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:27.292863 kubelet[2213]: E0513 00:51:27.292825 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:51:27.303679 kubelet[2213]: I0513 00:51:27.303629 2213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dbllw" podStartSLOduration=52.409369655 podStartE2EDuration="58.303601481s" podCreationTimestamp="2025-05-13 00:50:29 +0000 UTC" firstStartedPulling="2025-05-13 00:51:00.71136663 +0000 UTC m=+52.727505233" lastFinishedPulling="2025-05-13 00:51:06.605598446 +0000 UTC m=+58.621737059" observedRunningTime="2025-05-13 00:51:07.657638922 +0000 UTC m=+59.673777535" watchObservedRunningTime="2025-05-13 00:51:27.303601481 +0000 UTC m=+79.319740094" May 13 00:51:28.558130 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:44652.service. May 13 00:51:28.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.140:22-10.0.0.1:44652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:51:28.559456 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 00:51:28.559507 kernel: audit: type=1130 audit(1747097488.557:556): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.140:22-10.0.0.1:44652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:51:28.587000 audit[5242]: USER_ACCT pid=5242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:28.589068 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 44652 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:51:28.590333 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:51:28.596196 kernel: audit: type=1101 audit(1747097488.587:557): pid=5242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:28.596297 kernel: audit: type=1103 audit(1747097488.589:558): pid=5242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:28.589000 audit[5242]: CRED_ACQ pid=5242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:28.593817 systemd-logind[1296]: New session 24 of user core. May 13 00:51:28.594584 systemd[1]: Started session-24.scope. 
May 13 00:51:28.598983 kernel: audit: type=1006 audit(1747097488.589:559): pid=5242 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 May 13 00:51:28.599029 kernel: audit: type=1300 audit(1747097488.589:559): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff89650260 a2=3 a3=0 items=0 ppid=1 pid=5242 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:28.589000 audit[5242]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff89650260 a2=3 a3=0 items=0 ppid=1 pid=5242 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:51:28.589000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 00:51:28.604137 kernel: audit: type=1327 audit(1747097488.589:559): proctitle=737368643A20636F7265205B707269765D May 13 00:51:28.604170 kernel: audit: type=1105 audit(1747097488.597:560): pid=5242 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:28.597000 audit[5242]: USER_START pid=5242 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:28.598000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 
00:51:28.611608 kernel: audit: type=1103 audit(1747097488.598:561): pid=5245 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:28.710890 sshd[5242]: pam_unix(sshd:session): session closed for user core May 13 00:51:28.710000 audit[5242]: USER_END pid=5242 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 13 00:51:28.713162 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:44652.service: Deactivated successfully. May 13 00:51:28.714179 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:51:28.714630 systemd-logind[1296]: Session 24 logged out. Waiting for processes to exit. May 13 00:51:28.715368 systemd-logind[1296]: Removed session 24. 
May 13 00:51:28.710000 audit[5242]: CRED_DISP pid=5242 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:28.719422 kernel: audit: type=1106 audit(1747097488.710:562): pid=5242 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:28.719509 kernel: audit: type=1104 audit(1747097488.710:563): pid=5242 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:28.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.140:22-10.0.0.1:44652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:51:29.056140 kubelet[2213]: E0513 00:51:29.056094 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:51:31.055896 kubelet[2213]: E0513 00:51:31.055860 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:51:33.713665 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:52120.service.
May 13 00:51:33.718982 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 13 00:51:33.719067 kernel: audit: type=1130 audit(1747097493.712:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.140:22-10.0.0.1:52120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:51:33.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.140:22-10.0.0.1:52120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:51:33.745000 audit[5257]: USER_ACCT pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.746475 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 52120 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8
May 13 00:51:33.749859 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:51:33.748000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.753332 systemd-logind[1296]: New session 25 of user core.
May 13 00:51:33.753797 kernel: audit: type=1101 audit(1747097493.745:566): pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.753834 kernel: audit: type=1103 audit(1747097493.748:567): pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.754048 systemd[1]: Started session-25.scope.
May 13 00:51:33.756160 kernel: audit: type=1006 audit(1747097493.748:568): pid=5257 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
May 13 00:51:33.756205 kernel: audit: type=1300 audit(1747097493.748:568): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1319b8d0 a2=3 a3=0 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:51:33.748000 audit[5257]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1319b8d0 a2=3 a3=0 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:51:33.748000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 13 00:51:33.761672 kernel: audit: type=1327 audit(1747097493.748:568): proctitle=737368643A20636F7265205B707269765D
May 13 00:51:33.761725 kernel: audit: type=1105 audit(1747097493.757:569): pid=5257 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.757000 audit[5257]: USER_START pid=5257 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.759000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.769441 kernel: audit: type=1103 audit(1747097493.759:570): pid=5260 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.896008 sshd[5257]: pam_unix(sshd:session): session closed for user core
May 13 00:51:33.895000 audit[5257]: USER_END pid=5257 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.898440 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:52120.service: Deactivated successfully.
May 13 00:51:33.899508 systemd[1]: session-25.scope: Deactivated successfully.
May 13 00:51:33.899582 systemd-logind[1296]: Session 25 logged out. Waiting for processes to exit.
May 13 00:51:33.900805 systemd-logind[1296]: Removed session 25.
May 13 00:51:33.895000 audit[5257]: CRED_DISP pid=5257 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.904537 kernel: audit: type=1106 audit(1747097493.895:571): pid=5257 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.904603 kernel: audit: type=1104 audit(1747097493.895:572): pid=5257 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 13 00:51:33.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.140:22-10.0.0.1:52120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'